Architecture Requirements are Ilities

The business analyst(s) will collect the functional and system requirements, but the architecture has requirements of its own. It is the software architect's job to find the right people and talk to them about these -- the system "ilities."

An "ility" is a characteristic or quality of a system that applies across a set of functional or system requirements. Performance, for example, is an "ility" because it applies across some of the functional or system requirements. Anything that can be expressed in the form "for a set of functional or system requirements, the system must fulfill them this way (this fast, this reliably, etc.)" is an "ility."

It is not always easy to help the users find all of the ilities that they assume apply to their current system or will apply to the new one. You might try using the following list of commonly occurring ilities to jump-start the conversation. Review it with your experts, then drive down to the details of what each of these means in the context of the particular project. Assume that other ilities not on this list are out there, either because the list is incomplete or because the project has some specific needs.

It is important to find as many of these as possible and to describe them as accurately and as early as possible. Since they describe ways that sets of functional requirements must be satisfied, they are effort multipliers for development. For example, if a set of functions has to be secured, then the effort to secure a single function must be multiplied across each of the functions to be secured.
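To see the multiplier effect concretely, here is a minimal back-of-the-envelope sketch in Python. The function names and day counts are hypothetical, invented purely for illustration:

```python
# Back-of-the-envelope sketch of an "ility" as an effort multiplier.
# All function names and day counts are hypothetical.

base_effort_days = {         # effort to build each function, ignoring ilities
    "create_order": 5,
    "update_order": 4,
    "search_orders": 6,
}

security_overhead_days = 2   # assumed extra effort to secure one function

# The security "ility" applies across the whole set of functions, so its
# cost is multiplied by the number of functions it covers.
total = sum(base_effort_days.values())
total += security_overhead_days * len(base_effort_days)
print(total)  # 15 base days + 3 functions * 2 days = 21
```

Three functions at two extra days each adds six days -- small per function, but it grows linearly with every function the ility touches.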

Also, find out how important each one is on a five-point scale: VL (very low), L, M, H, VH (very high). It is not a big surprise if almost everything gets rated M, H, or VH, but try not to let every ility come out as VH. Ranking requirements this way sets up using a Relationship Matrix for evaluating architectural decisions. It is also sometimes worth gathering and ranking requirements with different user audiences to find places where the audiences disagree about what is important. After all, it is hard to generate and evaluate architecture alternatives and get agreement if different audiences are looking for different things. A bit of values clarification can help keep the architecture process from getting stuck.
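A Relationship Matrix built on those ratings can be sketched in a few lines of Python. The VL-VH weights follow the scale above; the ilities, the two alternatives, and the 0/1/3/9 support scores are hypothetical, in the style QFD commonly uses:

```python
# Minimal sketch of a Relationship Matrix: score architecture alternatives
# against ranked ilities. All ratings and scores here are hypothetical.

weights = {"VL": 1, "L": 2, "M": 3, "H": 4, "VH": 5}

ility_ratings = {            # how important each ility is to this project
    "performance": "H",
    "security": "VH",
    "portability": "L",
}

# How strongly each alternative supports each ility (0 = not at all,
# 9 = strongly), in the classic QFD 0/1/3/9 style.
alternatives = {
    "two_tier":   {"performance": 9, "security": 3, "portability": 1},
    "three_tier": {"performance": 3, "security": 9, "portability": 9},
}

def score(alt):
    """Weighted sum of ility importance times how well the alternative supports it."""
    return sum(weights[ility_ratings[i]] * s
               for i, s in alternatives[alt].items())

for name in alternatives:
    print(name, score(name))  # two_tier 53, three_tier 75
```

The point is not the arithmetic but the conversation: when two audiences assign different importance ratings, the matrix makes the disagreement (and its effect on the ranking of alternatives) visible.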

Here is my starter list. The first five are the Microsoft "Big Five" for the .NET architecture.

Starter List of "Ilities"

  • Performance
    • How quickly must the system respond to interactive operations of different kinds?
    • Are there different classes of interactive operations that users have different tolerances / expectations for?
    • Is there a batch window? What runs in it?
    • Do the batches have their own performance constraints, e.g., to clear the batch window before it closes?
    • Does the batch load influence any interactive users running at the same time?
    • Is there data with a high read/write access ratio that can be cached in memory at different tiers in the architecture?
    • What are the expected performance bottlenecks?
      • CPU?
      • Memory on client, server or intermediate nodes?
      • Hard drive space on each node?
      • Communications links?
      • DB
        • Access
        • Searching
        • Complex joins
      • Interaction with other internal systems?
      • Interaction with systems in other departments?
      • Interaction with partner systems?
      • Interactions with public systems?
  • Scalability
    • Peak load of how many users doing what kinds of operations?
    • Ability to grow to how many records in which critical database tables without slowing down related operations by more than X?
    • Avoiding saturating a communication link that cannot be upgraded to a higher speed?
    • What dimensions can be scaled, e.g., more CPUs, more memory, more servers, geographical distribution?
    • Is the primary scaling strategy to "scale up" or to "scale out" -- that is, to upgrade the nodes in a fixed topology, or to add nodes?
  • Availability
    • What is the required uptime percentage?
    • Does this vary by time of day or location?
    • What is the current schedule of controlled outages? Is this acceptable, or is there a goal to improve it?
  • Reliability
    • Are there components with reliabilities that are known to be less than the required reliability of the system?
    • What strategies are currently in place to build more reliable capabilities out of less reliable capabilities?
    • What is the expected mean time to failure by failure severity by operation?
    • How will reliability be assessed prior to deployment?
  • Security
    • What operations need to be secured?
    • How will users be administered?
    • How will users be given permissions to access secured operations?
    • What are the different levels of security, and how do they map to the following?
      • Security by operation
      • Security by type of object
      • Security by instance of object
  • Maintainability
    • Are there concerns about the ability to hire appropriate technology skills, or to attract them to the area at reasonable rates?
    • What kinds of changes are anticipated in the first rounds of maintenance? What is their relative priority?
    • What sort of regression testing is required to ensure that maintenance changes do not degrade existing functionality?
    • What sort of maintenance documentation is expected to be produced? When?
  • Flexibility
    • Is there system behavior that needs to be changed regularly without program changes?
      • Can this be encoded in the database?
      • Are there run-time rules that can be handled using a rules interpretation engine?
      • Are there functions that should be user scripted? If so, how will these be QA-ed?
  • Configurability
    • What parameters need to be set on a machine-by-machine basis?
  • Personalizability
    • What aspects of the system can be customized on a per-user basis?
    • How does the user change these settings?
    • What is the strategy for defaults?
  • Usability
    • Are there operations that need to be done as quickly as possible, so that user gestures should be minimized?
    • Are there difficult or occasional-user operations that require non-standard presentations to help the user perform correctly?
    • What is the balance between data integrity and the ability to stop in a "work in progress" state?
    • What styles of validation are used in what situations?
    • What metaphors from existing or parallel systems should be used?
    • What sort of training deliverables are expected?
    • What sort of on-board help system is expected?
  • Portability
    • Data portability between this system and other systems?
    • Portability across different versions of a single vendor's DB?
    • Ability to port to a different vendor's DB? Which one(s)? When?
    • Browser portability? What browser versions? Historical and future?
    • Operating system portability?
  • Conformance to standards
    • What legal standards apply?
    • What technical standards apply?
    • Other standards, e.g., Section 508 for disabled users?
    • What development standards apply?
      • Database naming standards
      • Existing internal architectural standards (e.g., everything goes in an Oracle database)
      • Language and coding standards
      • Testing and review standards
      • Presentation standards, e.g., use of standard colors, controls or other affordances?
      • Lifecycle models or methodologies
  • Internationalizability
    • What languages?
    • In what order?
    • How translated?
    • Single- or multi-byte character sets?
  • Efficiency -- space and time
  • Responsiveness
    • What are the expected and upper limit response times per operation in the system?
    • What is the trade-off between lower averages and wider variations in response time?
  • Interoperability
    • What systems will this system interoperate with immediately?
    • What other systems are anticipated?
    • What classes of internal and external systems might this system later need to interoperate with?
    • What functionality from this system needs to be exposed as a service in a service-oriented architecture?
    • What functionality from this system needs to be exposed as a Web service or via a portal?
  • Upgradeability
    • Do the servers need to be upgraded while running?
    • How many client stations need to be upgraded, and what are the costs and mechanisms for upgrading them?
    • How often do different kinds of fixes need to be distributed? Are there "hot fixes" that have to go out right away, while others can wait? How often does each kind occur?
  • Auditability / traceability
    • What record of who did what when must be maintained?
    • For how long?
    • Who accesses the audit trails?
    • How?
    • Is archive to tape or other off-site storage media required?
    • Is "effective dating" required?
  • Transactionality
    • What are the important database and application transaction boundaries?
    • Is standard "optimistic" locking appropriate, or is something more complex required in some or all cases?
    • Is disconnected operation required by any node?
  • Administrability
    • What live usage information needs to be displayed?
    • To whom? How? When?
    • What "live" interventions are required?
    • What ability to handle remote configurations is required?
    • Are there existing application management consoles that will be used to manage this application?
  • Lots of others -- what are your favorites?
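Some of the questions above are concrete enough to sketch in code. For instance, the standard "optimistic" locking asked about under Transactionality can be shown with a minimal in-memory Python sketch; the row structure, field names, and version scheme are hypothetical, standing in for whatever the database actually provides:

```python
# Sketch of "optimistic" locking: an update succeeds only if the row's
# version has not changed since it was read. In-memory stand-in for a
# database row; field names are hypothetical.

class StaleUpdateError(Exception):
    pass

def optimistic_update(rows, row_id, expected_version, new_data):
    """Apply new_data to rows[row_id] only if its version still matches."""
    row = rows[row_id]
    if row["version"] != expected_version:
        raise StaleUpdateError(f"row {row_id} changed since it was read")
    row.update(new_data)
    row["version"] += 1   # bump the version so concurrent writers detect it

rows = {1: {"version": 1, "name": "widget"}}
optimistic_update(rows, 1, expected_version=1, new_data={"name": "gadget"})

# A second writer still holding version 1 is now rejected:
try:
    optimistic_update(rows, 1, expected_version=1, new_data={"name": "gizmo"})
except StaleUpdateError:
    print("stale update rejected")
```

The architectural question is what the application should do on rejection -- retry, merge, or push the conflict back to the user -- which is exactly where "something more complex" than the standard scheme may be required.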

Kano's Kinds of Requirements

There is lots of useful stuff to steal from the Quality Function Deployment (QFD) folks. Kano defines three kinds of requirements:

  1. Exciting - "Over the top" things that, if they could come true, would radically change / improve the system. Exciting requirements can change the game -- but you have to fulfill the other kinds of requirements first, or demonstrate how fulfilling the exciting requirements makes the other requirements unnecessary.
  2. Regular - These are the standard things that you will get told if you just ask the question. You will probably get most of these if you just apply the standard attention to detail.
  3. Expected - These are the tricky ones. When the users give you the regular requirements, they are telling you things that they assume you don't know. The expected requirements are the unspoken things that they can't imagine anyone not knowing. You have to find these, because fulfilling the regular or exciting requirements does you no good if you miss an important expected requirement. It can be a real end-of-project deal killer to discover an unfulfilled expected requirement during UAT or system testing. Also, the audience for these requirements may not be the traditional business users. Talk to the DBAs and the system administration staff as well.