Software Processes and Management Notes

Note: These notes are a mixture of lecture material, textbook material and private research. All credit to the University of Melbourne.

Overview of Subject

Part I - Introduction to Software Engineering

  • What is Software Engineering/Computer Science?
  • Why do we need Software Engineering?
  • Project Management
    • Definition
    • Managing (control & monitoring)
    • SW processes

Part II - Controlling Disciplines:

  • Processes - increase chance of success, reduce risk
  • People and Teams - what motivates people
    • Power and Governance
  • Project Plan/Schedule
    • Work breakdown structure
    • Analyse dependencies
    • Determine resources needed and estimated time to complete
    • Critical path and critical activities
  • Configuration Management
    • Ensure all artefacts are consistent with each other

Part III - Monitoring Disciplines:

  • Metrics, Cost and Estimation
  • Risk management
  • Quality Assurance


  • Chapter 1: Introduction to Software Engineering / Development
  • Chapter 2: An Introduction to Software Processes and Project Management
  • Chapter 3: Software Development Life Cycle Models
  • Chapter 4: Governance, Teams, People, and Human Resources
  • Chapter 5: Planning and Scheduling
  • Chapter 6: Configuration Management
  • Chapter 7: Metrics, Cost and Estimation
  • Chapter 8: Risk Management
  • Chapter 9: Quality Assurance

Not examinable

  • Week 7 lecture: Software PM Tools and Techniques
  • Week 6 guest lecture (audio)
  • Detailed Function Point Analysis calculations
  • Detailed COCOMO calculations

Chapter 1: Introduction to Software Engineering / Development

Software Engineering is not the same as Software Development

Software is a solution and a problem.

General engineering steps to problem solving:

  1. Identify problem
  2. Analyse problem
  3. Derive solutions
  4. Choose solution
  5. Realise solution

A successful software system should solve a bigger problem than it creates.

A software system is a set of programs that are inter-connected or related.

Software designs often rely on APIs that allow interaction with hardware platforms without worrying about the details of the interaction. A “systems approach” to design is required for applications involving hardware/networks. Most software design requires consideration of network latency, communication with external devices and interaction with other systems.

Software engineering is the application of engineering principles to the development and maintenance of software, as well as the study of how this can be done.

Software Engineering Body of Knowledge (SWEBOK) areas:

  1. Software Requirements
  2. Software Design
  3. Software Construction
  4. Software Testing
  5. Software Maintenance
  6. Software Configuration Management
  7. Software Engineering Management
  8. Software Engineering Process
  9. Software Engineering Tools and Methods
  10. Software Quality

Aspects to software engineering:

  1. Computer science theory
  2. Processes
  3. Project management
  4. Planning and measurement techniques
  5. Experience

Computer Science vs Software Engineering

Computer science is the theoretical foundation on which software engineering is built.

The inherent difference between the two is complexity.

Computer science → small-scale problems regarding certain computation problems

Software engineering → build and manage large-scale systems

Software engineers use a top-down approach to gather understanding at a high level and break down into smaller, more manageable, problems. Bottom-up approach then required to solve smaller problems using computer science.

Software Engineering (SE) vs Other Engineering

Common Traits in all Engineering

  • Building reliable products to solve problems
  • Use of science, maths and empirical knowledge
  • Large teams building large-scale products

Where SE is different

  • Age - new discipline
  • Cost - % engineering cost higher (minimal material cost)
  • Flexibility - changes often requested
  • Innovation - software can be replicated, therefore most engineering problems are new
  • Domain specificity - highly relevant
  • Complexity - amount of variables/interactions
  • Reliability - more unreliable

Software engineering is becoming more important as increases in computing power lead to more complex software.

SE uses 4 main disciplines:

  1. Controlling
  2. Monitoring
  3. Analysis
  4. Synthesis

Reasons for failed projects

Reason                          %
Misunderstanding Requirements   53%
Design Failures                 22%
PM Failures                     13%
Other                           12%

Programming becomes more complex with computing power.

Over the last 50 years, the rate of project failure has remained similar, but the complexity of projects has increased dramatically.

Chapter 2: An Introduction to Software Processes and Project Management

A model is a simplified description of an entity or process.

Models used to help understand a complex system.

A process is a set of ordered activities, containing inputs, outputs, activities and resources, enacted for the purpose of achieving a goal.

Most SE requirements processes have the following activities:

  1. Elicitation
  2. Analysis and Modelling
  3. Specification
  4. Validation

Model of ideal process

Model of realistic process

Process overview

A software lifecycle model describes the lifecycle of a software project from conception through to maintenance.

Goal of a lifecycle is to construct and deliver a quality product.

Use empirical evidence to decide which lifecycle model is appropriate for which kind of project.

Project Management

Project Management is the application of skill or care in the manipulation, use, treatment, or control of a collaborative enterprise that is carefully planned to achieve a particular aim.

Projects are managed using control and monitoring.

Control → implementing decisions

Monitor → making decisions

Main struggle in project management is managing humans.

Monitoring and Controlling

Chapter 3: Software Development Life Cycle Models

Software Development Life Cycle (SDLC) models: The over-arching processes in software engineering.

Difficulty in understanding complex software systems: behaviour of the whole is highly dependent on the individual parts, and each part is highly dependent on other parts.

Modern types of software systems:

  • Hosted applications - run on servers managed by developers (SaaS). Developers have high level of control
  • Mobile Applications - installed on mobile device, access remote server for data and business logic, device provides basic UI
  • Embedded applications - run on electronics devices like wearables and cars. Traditionally installed once, but now some update over-the-air (OTA)

The understanding of a software system depends on our understanding of the computer system executing it. The software itself is intangible and we only see a representation of it, such as program listings or UML. The only real way of engineering software, then, is engineering the process.

Two broad categories:

  • Formal SE - control all aspects, prescriptive processes
  • Agile SE - control outcomes, reactive processes

A process model is a template for a process that shows generic activities, input and outputs.

The process model describes the ideal process to be followed.

Software projects usually consist of the following activities or phases:

  • Requirements
  • System/architectural design
  • Detailed design
  • Implementation
  • Integration
  • Testing
  • Delivery and release
  • Maintenance

Each phase produces artefacts (many of which are used as inputs to other phases).

Usually start with some form of SDLC model, then modify/add to it if required.

Iterative model with added steps of reliability evaluation and random


Each phase can be broken down into sub-phases. We focus on the within phase processes.

Sub-phase processes

Formal Software Development Life Cycle Models

Waterfall Model

Assumes each phase must be complete before moving to the next one. Its creator actually argued that the pure model is flawed and needs to be modified for real application. The stages can be combined in different ways to create more effective processes.

Classic Waterfall Model

Each phase produces an artefact:

  • Requirements package
  • Software design
  • Code Base
  • Test plan, cases and reports

Allow developers to:

  • Measure progress
  • Evaluate quality

Decomposition of the requirements phase

Good for projects with:

  • well understood requirements
  • clear goals

Why it’s good:

  • good model for estimating project costs
  • tracking progress
  • easy to understand and apply

Projects it’s bad for:

  • changing technology
  • uncertain requirements

Why it’s bad:

  • doesn’t take into account technological and domain risks
  • doesn’t allow for the iterations required to understand the domain
  • not enough feedback to client
  • testers can’t start until implementation is done

Modified waterfall to support unforeseen circumstances


V-Model

Originated from the need to develop better testing processes.



Incremental and Iterative Models

Aim to deal with uncertainty and changing project environments.


Incremental Development

Divide development into fixed increments, each involving planning, requirements, design, implementation and testing. Each increment can follow a mini waterfall or some other method.

Key requirement: each increment develops a complete usable subset of the system functionality that can be deployed.

Incremental model

  1. First set of planning needs to be detailed enough for a high-level system design and to divide functionality up
  2. Architecture often produced early to be able to integrate increments
  3. Each evaluation produces some requirements for next increment
  4. Requires good configuration management


  • Manage risk of changing and uncertain environments by releasing early and often (get client feedback)
  • Can correct earlier misunderstandings and learn more about the problem domain


Iterative Development

Development is broken into a number of iterations.

Iterations have purpose of:

  • Refining and improving requirements, design and implementation based on feedback and testing
  • Adding new functionality to evolving system

Also manage risk of changing and uncertain requirements through early and frequent feedback.


Incremental vs iterative:

  • Incremental → different parts are developed at different times/rates and then integrated
  • Iterative → the development strategy is reworked each iteration

Spiral Model

Type of iterative model.

Each iteration has a distinct set of activities designed to manage risk. Also provides opportunities to get better understanding of domain throughout project.

Spiral Model

Prototyping phase is essential to gathering requirements, getting client feedback and evaluating alternatives. The prototyping phase is used to produce the concept of operations (how the system should work at a high-level, from a user’s perspective).

Projects it’s good for:

  • mission critical projects

Reasons it’s good:

  • can re-evaluate project direction each iteration of spiral
  • continuously manage risk

Reasons it’s bad:

  • hard to manage

Agile Approaches to SE

Agile has come about from dissatisfaction with overheads in formal processes. Agile methods:

  • focus on the code
  • based on iterative approach
  • deliver software quickly
  • evolve working software to changing requirements

Key principles:

  • customer involvement
  • incremental delivery
  • people over process
  • embrace change
  • maintain simplicity

Note: Agile Manifesto and 12 Key principles covered later on.

Many agile methods exist. Some of the more popular are covered below.

Extreme Programming (XP)

Departure from formal processes. Brings people together to focus on quality code. Short iterations (2 weeks max).

Some key principles:

  • Test-first development - a build is only accepted if all tests pass
  • Re-factoring
  • Pair programming
  • Continuous integration
  • Sustainable pace
  • Onsite customer

Flow of activities


Weaknesses:

  • onsite customer is expensive
  • still requires a lot of discipline to follow (especially refactoring and onsite customer)


Scrum

An iterative, incremental methodology, and the most popular agile method in industry. Short development iterations are called sprints (2-4 weeks).

Flow of activities

Scrum Team:

  • Product Owner → represents stakeholders/customer, write user stories and product backlog
  • Scrum Master → facilitates the sprint, resolves impediments the dev team has, ensures the Scrum framework is followed
  • Dev Team

Daily scrum (time limit of 15 mins):

  • What did you do yesterday?
  • What will you do today?
  • Are there any impediments in your way?

Other meetings (max time limit ~3 hrs):

  1. Sprint review → review completed work and present to customer
  2. Sprint retrospective → reflect on sprint and discuss what went well/poorly

Anatomy of a Process

Generic pattern for a phase in a process

The output of a process is optional.

Quality gate → may be a technical review or a set of tests to perform on the output, ensures output is fit for purpose.

Example of requirements phase with resources

Chapter 4: Governance, Teams, People, and Human Resources

4.1 Understanding People

Studies in organisation psychology include:

  • motivation
  • influence
  • power
  • effectiveness


Two theories:

  1. Maslow (hierarchy of needs)
  2. Herzberg (two-factor theory)


Maslow's Hierarchy of Needs

  • It is at the top of the hierarchy (self-actualisation) that people are problem-focused, appreciate life, and are concerned with personal growth.
  • To motivate a project team, Project Managers (PMs) must understand each individual’s needs, usually social, esteem and self-actualisation, i.e. they need to be aware of the team’s personal lives alongside their professional ones.


Herzberg’s two-factor theory:

  • Salaries, job security, work environment → hygiene factors - they don’t motivate, but a lack of them causes dissatisfaction
  • Achievement, the work itself, recognition, responsibility and personal growth → motivational factors
  • Be aware that BOTH these sets of factors need to be controlled separately.


Nine influence bases available to project management to influence the project:

  1. Authority - right to issue orders by virtue of position
  2. Assignment - management’s perceived ability to influence workers’ future work assignments
  3. Budget - management’s perceived ability to authorise others’ use of discretionary funds
  4. Promotion - ability to improve a worker’s position
  5. Money - ability to increase a worker’s pay
  6. Penalty - management’s perceived ability to hand out penalties/punishments
  7. Work challenges - ability to assign work that uses a worker’s enjoyment of certain tasks
  8. Expertise - management’s perceived special knowledge that others deem important
  9. Friendship - ability of people in management to make friends with others

Projects in which management relies too heavily on authority, money and penalties to influence people are more likely to fail.


Power is the ability to influence people’s behaviour to get them to do what they would not otherwise do.

Types of power:

  1. Coercive - similar to penalty [bad idea]
  2. Legitimate - similar to authority [use with caution]
  3. Expert - similar to expertise
  4. Reward - similar to promotion, money, assignment
  5. Referent - based on personal charisma. People hold someone in very high regard and will do what they say based on their high regard for the person. It’s rare.

Leaders have a style that sits somewhere on a spectrum between two extremes:

  a) Task-focused - don’t bother much with human relationships
  b) People-focused - don’t bother with the mechanics of administration

Improving Effectiveness

The Seven Habits of Highly Effective People - by Stephen Covey


  1. Be proactive
  2. Begin with the end in mind - visualise the outcome and work out how to get there
  3. Put first things first - focus on importance more than urgency

  4. Think win/win
  5. Seek first to understand, then to be understood - empathetic listening
  6. Synergise - value differences in others. “a champion team is better than a team of champions”
  7. Sharpen the saw - renew yourself physically, mentally and socially; avoid burnout

Habits 4 and 5 set good project managers apart.

Rapport is a relationship of accord, or affinity.

4.2 Building Teams

Reasons for teaming up

  1. Security
  2. Task complexity
  3. Social interaction
  4. Physical proximity
  5. Exchange (cost vs benefit)

Norms and Roles

A way of looking at norms and roles in a team:

  • Norm → cultural rule that is observed or broken
  • Role → set of expected behaviours

Norms can be formal (hard rules) or informal (more cultural).

Consequences of breaking formal rules are clearly set out, but breaking informal norms could result in:

  • non-verbal consequences - e.g. sneers, disapproving expression
  • verbal - e.g. explicit criticism
  • physical - e.g. shoving, hitting

Role in a team can be categorised like so:

  1. task roles - get work done
  2. maintenance roles - for team effectiveness
  3. destructive roles - make it harder for team

Effective groups have a good mix of task and maintenance roles, and minimal/non-existent destructive roles.

Task roles          Maintenance roles   Destructive roles
Initiator           Encourager          Blocker
Information seeker  Harmoniser          Recognition seeker
Information giver   Standard setter     Dominator
Coordinator         Follower            Avoider
Evaluator           Group observer      Free Rider
                                        Lone wolf


Most teams go through five stages of development:

  1. Forming
  2. Storming
  3. Norming
  4. Performing
  5. Terminating


Team organisation often depends on culture, mix of people, problem to be solved and rigidity of delivery date.

Controlled Centralised (CC)

  • Clearly defined leadership at all times
  • Hierarchy of sub-teams
  • Sub-teams have leaders that report to management
  • Decisions made by management in consultation with team leaders, and passed on to team through leaders
  • In large teams, communication tends to be vertical not horizontal

Controlled Centralised


Advantages:

  • Scales effectively, good for large teams
  • Easy to manage because of well defined structure
  • Produces reliable and robust products

Disadvantages:

  • Effective life-time of teams is shortened
  • Increased communication overheads

Good for:

  • Short, tough deadlines (progress and risks can be actively monitored and acted upon)

Controlled Decentralised (CD)

Defined team leader at all times, but places more control in the hands of sub-task leaders and encourages horizontal communication. Shares similar attributes to CC, but gives more autonomy to teams.

Controlled Decentralised

Democratic Decentralised (DD)

No permanent leader. Task coordinators are for short duration and change in different phases. Requires lots of horizontal communication.

Quality and reliability in this structure can be low due to the quick turnaround of task coordinators, and not having a permanent team leader to perform quality assurance.

Democratic Decentralised


Suits a team of well-trained, motivated experts. Good for a short prototype phase or for difficult problems in a project.



Chief Programmer Team (CC)

Built around a highly skilled/experienced chief programmer who coordinates all technical activities.

Chief Programmer Team

Extreme Programming (XP) Team

  • May or may not be a team leader
  • Programmers work in pairs
  • One programmer to write all tests
  • Feature driven

Scrum Team

  • Controlled structure with democratic decentralised sub-team
  • Scrum Master is leader but doesn’t control team on day-to-day basis, just picks teams
  • Teams assign themselves work
  • Product owner responsible for assigning priority of requirements



Strengths and Weaknesses of CC, CD, DD

Chapter 5: Planning and Scheduling

Project Plan

Combines the project, process and people.

A project plan must define the tasks, and for each task define:

  • duration
  • dependencies
  • people
  • physical resources
  • milestones/goals

Making an infeasible project plan is a large risk in projects. The main reasons projects go over time and over budget relate to the project plan:

  • unrealistic deadlines
  • changing requirements
  • honest underestimates of effort
  • unaccounted for risks
  • technical difficulties
  • human difficulties
  • failure to see slippage
  • miscommunication

Planning and analysing processes early is crucial.

If client cannot alter schedule, consider iterative or incremental models (management structure supports shorter deadlines and phased delivery).

Basic Principles to Project Planning

  1. Compartmentalise - decompose (product and processes) until manageable
  2. Interdependency - tasks may have to occur sequentially
  3. Time Allocation - number of resources or effort, start and end dates, part-time/full-time
  4. Effort Validation - ensure only the available number of resources have been allocated
  5. Defined Responsibilities - every task needs a team member assigned to it
  6. Defined Outcomes - typically work products
  7. Defined Milestones - every task associated with a defined milestone

People and Effort

“Men and months are interchangeable commodities only when a task can be partitioned among many workers with no communication among them… it is not even approximately true of systems programming.” — Fred Brooks, The Mythical Man-Month

Using man-months can be misleading.

Adding more people to a project can actually cause more delays due to:

  • general disruption
  • unfamiliarity with the system (time taken from original people to train)
  • number of communication channels increasing
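The communication-channel point can be made concrete: with n people there are n(n-1)/2 pairwise channels, so channels grow quadratically while headcount grows linearly. A minimal sketch (the team sizes are illustrative, not from the lecture):

```java
// Pairwise communication channels in a team of n people: n * (n - 1) / 2.
// Adding one person to a team of 10 adds 10 new channels, not 1.
public class CommChannels {
    static int channels(int n) {
        return n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        for (int n : new int[] {2, 5, 10, 11}) {
            System.out.println(n + " people -> " + channels(n) + " channels");
        }
    }
}
```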

Putnam-Norden-Rayleigh curve

Effort cost is in person months

E_d = effort based on project resources available

T_d = nominal delivery time estimated by schedule

T_o = optimal delivery time in terms of cost

T_a = actual delivery time

E = m × (T_d^4 / T_a^4), where E is the effort (in person-months) required to deliver at actual time T_a, and m is a constant

Moving left of T_min (the minimum feasible delivery time) puts the project at very high risk of failure

Moving right of T_o needs to be carefully weighed against business opportunities/cost
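The fourth-power relationship makes schedule compression expensive very quickly. A small sketch of that arithmetic, assuming the E = m × (T_d/T_a)^4 form above (the month figures are invented for illustration):

```java
// Putnam-Norden-Rayleigh relationship: effort scales with the 4th power of
// (nominal delivery time / actual delivery time). Values are illustrative.
public class PnrEffort {
    // How much the nominal effort is multiplied when the schedule is
    // compressed (or relaxed) from td to ta.
    static double effortMultiplier(double td, double ta) {
        double ratio = td / ta;
        return ratio * ratio * ratio * ratio;
    }

    public static void main(String[] args) {
        // Compressing a 12-month schedule to 9 months roughly triples effort.
        System.out.printf("12 -> 9 months: %.2fx effort%n", effortMultiplier(12, 9));
    }
}
```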

Work Breakdown

Choosing a lifecycle model is the first step in developing a project plan. From the lifecycle model we get a high-level list of tasks that can be further compartmentalised into a work breakdown structure.

Work Breakdown Structure - Map

Work Breakdown Structure - List

100% Rule

Work breakdown structure includes 100% of the work defined by the project scope.


Dependencies

Dependencies may exist between tasks for the following reasons:

  • A task relies on a work product produced by another task
  • A task relies on a work product to be in a specific state before it can commence
  • A task needs the resources used by another task

Dependencies force sequencing on the set of tasks.

The result of this sequencing is a task network.

Task Network

Project Schedule

Already have tasks and dependencies, now need to work out:

  • time estimates (project length)
  • resources needed (cost)

This will make up our project schedule.

Two kinds of graphical notation for project schedules:

  1. activity charts (like task network, PERT chart) - shows critical path
  2. bar charts (Gantt chart) - shows schedule of tasks against calendar time


  • milestone - represents completion of an activity or delivery of a work product (takes zero time)
  • activity - part of project that requires resources and time
  • free float / free slack - amount of time a task can be delayed without causing delay to subsequent tasks
  • total float / total slack - amount of time a task can be delayed without delaying project completion
  • critical path - longest possible continuous path from initial event to terminal event. Determines total calendar time for project. Delays along critical path will delay project by at least that amount
  • critical activity - activity with total float equal to zero (not necessarily on the critical path)

PERT Charts

Program Evaluation and Review Technique charts.

Represent the project schedule as an activity network. Ideal for early stage planning as they were designed for uncertainty.

Make estimates of project durations by decomposing the project into tasks and dependencies. Each node in network is a model of the task from the work breakdown. Edges model the dependencies between tasks.

PERT charts make use of bounded uncertainty in the duration of tasks as part of the analysis. A typical analysis involves the following:

  • predecessor node - a node immediately preceding another node without any intervening
  • successor node - a node immediately following another node without any intervening
  • optimistic time (O) - minimum possible time required to accomplish a task
  • pessimistic time (P) - maximum possible time required to accomplish a task
  • most likely time (M) - best estimate (assuming everything proceeds as normal)
  • expected time (TE) - average time task would require if repeated multiple times over extended period

TE = (O + 4M + P)/6
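The weighted average above can be checked with a one-line calculation; the O/M/P values below are invented for illustration:

```java
// PERT expected time: TE = (O + 4M + P) / 6 — a weighted average that pulls
// the estimate toward the most likely time M.
public class PertEstimate {
    static double expectedTime(double o, double m, double p) {
        return (o + 4 * m + p) / 6.0;
    }

    public static void main(String[] args) {
        // O = 2 days, M = 4 days, P = 12 days -> TE = (2 + 16 + 12) / 6 = 5 days
        System.out.println(expectedTime(2, 4, 12));
    }
}
```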

An aim of a PERT analysis is to allow project managers to do scheduling trade-offs and monitor project progress. This involves calculating the following:

  • earliest start time (ES)
  • latest start time (LS)
  • earliest finish time (EF)
  • latest finish time (LF)
  • slack time

Calculating ES and EF → forward pass through network

Calculating LS and LF → backwards pass through network

Earliest start/finish process (forward pass):

  1. Estimate expected time (TE) of all tasks
  2. Fill ES and EF for “no dependency” tasks. ES = 0, EF = duration
  3. Fill ES and EF for “successor” nodes. ES = max EF of predecessors
  4. Repeat until no activities left

Latest start/finish process (backward pass):

  1. Identify activities with no successor nodes. LF = final day of project, LS = LF - duration
  2. Identify predecessor nodes. LF = minimum LS of successors
  3. Repeat until no activities left
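The two passes above can be sketched on a toy task network. The task names, durations and dependencies here are invented; tasks are processed in topological order, ES/EF come from the forward pass, LS/LF from the backward pass, and activities with zero total slack (LF − EF = 0) form the critical path:

```java
import java.util.*;

// Forward and backward pass over a small task network.
// A(3) and B(2) start the project; C(4) depends on A and B;
// D(2) depends on B; E(1) depends on C and D. Values are illustrative.
public class CriticalPath {
    static final Map<String, Integer> DURATION = Map.of(
            "A", 3, "B", 2, "C", 4, "D", 2, "E", 1);
    static final Map<String, List<String>> PREDS = Map.of(
            "A", List.of(), "B", List.of(),
            "C", List.of("A", "B"), "D", List.of("B"),
            "E", List.of("C", "D"));
    static final List<String> ORDER = List.of("A", "B", "C", "D", "E"); // topological

    // Returns {ES, EF, LS, LF} for each task.
    static Map<String, int[]> schedule() {
        Map<String, Integer> es = new HashMap<>(), ef = new HashMap<>();
        // Forward pass: ES = max EF of predecessors (0 if none); EF = ES + duration.
        for (String t : ORDER) {
            int start = PREDS.get(t).stream().mapToInt(ef::get).max().orElse(0);
            es.put(t, start);
            ef.put(t, start + DURATION.get(t));
        }
        int projectEnd = ef.values().stream().max(Integer::compare).orElse(0);
        // Backward pass: LF = min LS of successors (project end if none); LS = LF - duration.
        Map<String, Integer> ls = new HashMap<>(), lf = new HashMap<>();
        for (int i = ORDER.size() - 1; i >= 0; i--) {
            String t = ORDER.get(i);
            int finish = projectEnd;
            for (String succ : ORDER) {
                if (PREDS.get(succ).contains(t)) finish = Math.min(finish, ls.get(succ));
            }
            lf.put(t, finish);
            ls.put(t, finish - DURATION.get(t));
        }
        Map<String, int[]> out = new LinkedHashMap<>();
        for (String t : ORDER)
            out.put(t, new int[] {es.get(t), ef.get(t), ls.get(t), lf.get(t)});
        return out;
    }

    public static void main(String[] args) {
        schedule().forEach((t, v) -> System.out.printf(
                "%s: ES=%d EF=%d LS=%d LF=%d slack=%d%n",
                t, v[0], v[1], v[2], v[3], v[3] - v[1]));
    }
}
```

On this network the critical path is A → C → E (duration 8): those three tasks have zero slack, while B has slack 1 and D has slack 3.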

Data associated with each task is modelled as a node.

Pert Chart Node

Total Slack = LF - EF (equivalently LS - ES)

Gantt Charts

Gantt charts show the same information, but also show the duration of each activity. They can also fit more information on a page due to their tabular form.

Gantt charts are NOT used for project design and defining work breakdown structure.

  • Process design → choosing a set of processes to meet a goal
  • Work breakdown structure → defines the tasks that need to be done as part of the process

Gantt charts are used to represent this design as a project schedule. They focus only on schedules, not on scope or cost. They can show dependencies and resources, as well as progress against calendar time.

Critical Path Methods

  • A path is a sequence of consecutive nodes.
  • The total duration is the sum of the durations of the nodes in the path.
  • The critical path is the path with the longest duration.
  • The overall duration of the critical path estimates the total time the project will take.
  • Activities on the critical path have a free slack of 0.
  • Any delay in starting or finishing an activity on the critical path will delay the completion time of the project

To reduce the overall project time, the critical path must be shortened. This can be done by:

  • removing some dependencies between activities on the critical path
  • adding resources to activities on the critical path to shorten their durations

This is often referred to as crashing the project plan.

Project Tracking and Control

Ways of tracking the project plan:

  • periodic reviews where team members report progress
  • check if milestones have been accomplished by their scheduled dates
  • compare actual start dates to scheduled start dates
  • hold meetings with engineers to get subjective assessments of progress
  • using formal methods, like earned value analysis

The aim of tracking the project is to exercise control.

If there are problems, they must be analysed and control must be exercised to reconcile the problems. Often resources need to be shifted around and changes to the project plan have to be made.

Planning in Agile Development

In agile, detailed planning is not undertaken until the start of an iteration. Agile processes are designed to handle change. A prioritised list of requirements is maintained, but requirements can be added, removed, or changed at any time. Plans to build requirements are not made until the requirements are about to be built, as they may change. This makes it more difficult to estimate the completion time of a project, but results in less wasted planning time. Planning of iterations is done at a requirement level, not an individual task level, e.g. “Complete requirement X”, not “Design X” / “Implement X”.

Gantt charts and PERT charts are not seen to be useful in Agile.

Rules used in Agile:

  • Plan short iterations - gives team a measurable progress indicator and functionality is regularly delivered
  • Produce useful functionality
  • Use “Just in time” (JIT) planning
  • Use the team - all members involved in work scheduling and assignment
    • feeling of belonging
    • team members have to implement, so are invested in good planning
    • team members know strengths and preferences

Dangers of Agile “propaganda”

  • Agile doesn’t mean “no planning” or “no other lifecycle model is appropriate”
  • Agile approach less likely to be effective in systems where safety, reliability and security are important factors.
  • Agile methodology less suited to geographically separated teams

Extra Chapter: Agile Projects and Big Data Projects

  • What is trending in IT? → Big Data
  • What is happening to products? → Quick Pivots
  • Why do projects? → Strategic initiatives, team building
  • When is an Agile project a good choice? → When requirements continually emerge or need to respond to change

Digital Power

Digital Power Growth rates:

  • Computing - doubles every 18 months
  • Communication - doubles every 9 months
  • Storage - doubles every 12 months
  • Content - grows as 2^N

Digital power is experiencing an exponential rate of growth that reduces computing and networking costs 95-97% every ten years.


Big Data

Data growth is exponential.

Data can be:

  • structured
  • unstructured
  • geospatial

Technical challenges of big data:

  • store volume
  • integrate a variety of messy, scattered data
  • manage velocity

Value challenges of big data:

  • data becomes irrelevant quickly
  • low density/redundant information
  • knowing the potential customer


Want to identify specific consumer groups to:

  • target with novel products
  • exploit transient trends
    • shorter interval product lifecycle
    • re-purpose quickly (pivot)
  • constantly respond and adapt product

Products that Pivot quickly provide:

  • flexible services
    • high internal cohesion
    • black box encapsulation
    • abstract interface
      • mix-n-match services into new products
  • interoperable services
    • integrate with 3rd party services
    • dissimilar technologies work together dynamically
  • adaptive

Flexible Software Design

Service Oriented Architecture (SOA) → mix abstract software services (vertical integration)

Service Oriented Architecture

Are Projects still relevant?

  • Projects have a longer planning horizon than products
    • allows strategic initiatives
    • allows risk mitigation
  • Provide a team environment
    • rewards of collaboration

Think big, act small.

Statistics on different models


Complexity Measures

Can solve problems by breaking them up into smaller problems and solving them independently.

In an IT context we can do this by utilising:

  • Low coupling of features - reducing impact of change through components having minimal knowledge of each other

System with loose coupling


  • Dependency injection (architectural pattern) - passing dependencies to classes instead of having them create them (ability to mix-n-match features). Applications can configure and use 3rd party code remotely/dynamically at run time
public class UserController {
    private IUserService userService;

    // The service is injected via the constructor rather than constructed
    // here, so implementations can be swapped at run time.
    public UserController(IUserService userService) {
        this.userService = userService;
    }
}

Example: Large Complex Agile Project - Candy Crush

  • Use Scrum project method
  • 70 big data analysts
  • 70 scrum teams
  • 2 week sprints


Development team

Usual team:

  • backend
  • frontend
  • quality assurance

Specialist team:

  • designer
  • UX
  • software architect
  • IT

Scrum Team


Project Management Plans (PMP)

Formal PMP Lifecycle

Formal Life Cycle Models - “Deliver a product eventually”

Agile PMP Lifecycle

Agile PMP Life Cycle → “Deliver frequent small chunks of product”


All projects have a “project charter”.

Inception Stage

  • Recognise end goal
  • Define protocol to communicate
  • Stakeholder management
  • Formal approval
  • Follow an accepted project lifecycle

Delivery Date

  • Demonstrate understanding of requirements
  • Build a dedicated and focused team
  • Share and manage a schedule

Different Perspectives of Scope

Scope in Waterfall vs Agile


Agile Stages

  • Envisage
  • Speculate
    • High level features
  • Explore
    • End on time
    • Project managers role
  • Adapt
  • Close

Agile Stage


Envision Stage

  • Project Charter
  • Project Tool Set
  • Project Risk Register

Speculate Stage

  • Backlog features
  • Placeholder for conversations
  • Organise priorities

Plan for the next stage. Create:

  • Iteration plan
  • Milestones
  • Release plan

Explore Stage

  • The sprint
  • Conversations and collaborations
  • Explore → code!
  • End sprint on schedule, not when all features done
  • Establish team’s velocity

Scrum Master → project manager taking on the role of an observer.

  • Self organising teams
  • Visual progress on display
  • Everyone knows status
  • Nowhere to hide!

Adapt Stage

  • Be open and critical
  • Brainstorm important issues
  • Everyone has voice
  • Collect multiple alternative solutions to problems
  • Vote on solution to be adopted

Agile Manifesto

We are uncovering better ways of developing software by doing it and helping others do it.

  • Individual and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

“While there is value in the items on the right, we value the items on the left more.”

The 12 principles:

  1. Satisfy customer through early and continuous delivery
  2. Welcome changing requirements
  3. Deliver frequently
  4. Work together daily
  5. Build projects around motivated individuals and develop trust
  6. Utilise face-to-face conversation
  7. Primary measure of progress is working software
  8. Sustainable development (constant pace)
  9. Attention to technical excellence and good design
  10. Emphasise simplicity, maximise work not done
  11. Use self-organising teams
  12. Regularly reflect on how to be more effective

Agile “Tribes”

The Agile Tribes


Common culture

  • Visual task board
  • Burn Down chart
  • Burn Up chart
  • Test Driven Development


  • Design phase can be less rigorous
  • Refactoring time slot can be overtaken by new initiatives

Plus/minus/interesting of the Tribes:

Scrum - has sprints and self organising teams:


Plus:

  • Deliver chunks of stable and packaged code
  • Time boxed sprint fosters design opportunity


Minus:

  • Maintenance of code harder (less documented)
  • Quality less trusted
  • Contracting and subcontracting less supported


Interesting:

  • Meetings surrounding sprints can become “ceremonial”


Extreme programming - prescriptive:


Plus:

  • Deliver chunks of simple code quickly


Minus:

  • “TODO” list of features without a design structure


Interesting:

  • Pair programming


Kanban - visual:


Plus:

  • Widely adopted
  • Efficiently deliver code “Just in Time”
  • Intuitive
  • Smooth Work Flow


Minus:

  • Production line of features without design frame

Agile features





Kanban philosophy:

  1. Start with what you do now
  2. Pursue incremental, evolutionary change
  3. Respect the current standard way

Kanban methods:

  1. Visualise workflow - “swimlanes” with User Story cards (represent work tasks)
    1. “todo”
    2. “doing” - associate a team member’s name to the card, only one item per member
    3. “done”
  2. Limit Work in Progress (WIP)
  3. Manage flow
  4. Make policy changes explicit
  5. Collaborate - create area for team social interaction

A more specific work flow:

  • “todo”
  • “ready”
  • “in process”
  • “done”

Scrum Philosophy

  • Deliver high value in short period of time
  • Business sets priorities, team self-organises the delivery
  • Strive for maximum stability on user stories during each sprint
  • Measure progress in working software
  • Each sprint showcases working software to interested stakeholders

Scrum Methods


Roles:

  • Product owner
  • Scrum Master
  • Development Team


Ceremonies:

  • Daily Stand Up
  • Sprint Planning
  • Sprint Review
  • Sprint Retrospective


Artefacts:

  • Product Backlog - task list in priority order
  • Sprint Backlog - tasks selected for project release
  • User Stories - as a [user], I want [goal] so that [reason]
  • Burndown Chart - amount of work remaining
  • Burnup Chart - amount of work completed, shown against total scope

Why user stories?

  • Easier to communicate with users
  • Simplified plan
    • avoids locking in design detail too early
    • never out of date, just in time
  • Product backlog made up of Epic User Stories



Epic:

  • large initiatives delivering new services
  • collection of features


Feature:

  • capabilities product owner values
  • value realised by multiple user stories

User story:

  • planning item
  • conversation placeholder

Create Epics from high-level features in the PMP Speculate Stage

Common Mistakes

  • Too much detail
    • Can result in skipped conversations
    • Risk moving in wrong direction
    • Overlook specific customer needs
  • Technical tasks impersonating user stories
    • Doesn’t actually represent what the user wants

Scrum Backlog


Narrative Overview

Provide a non-technical reader an overview of project and solution.

  • Build trust with client
  • Demonstrate you have understood the case study
  • List all the features gathered

Product backlog:

  • Assume scrum role of Product Owner
  • Groom Product Backlog and list in order, from highest value to lowest value
  • Ordered list determines the scope of future projects
  • Low priority user stories don’t get done

Solution Overview

Sprint Backlog:

  • Describe proposed release
    • select from product backlog into sprint backlog
  • Identify boundary between what features are included and excluded
  • Use diagrams


  • Use a visual Kanban process to show how many user stories get done over a time-boxed sprint
  • Only complete user stories are counted
  • Establish a reliable velocity over a number of sprints

Scrum Burn Down

Track remaining effort.

  • Y-axis = Story Points remaining
  • X-axis = Elapsed time

Doesn’t necessarily show work added to scope.

Scrum Burn Up

Both work completed and total scope are shown.

  • Y-axis = Total Story Points
  • X-axis = Elapsed time

Able to predict when the release will be done.

Scope changes are made explicitly visible.
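A small numeric sketch of how the two charts differ, using hypothetical sprint data (the figures are illustrative only):

```python
# Hypothetical sprint data: cumulative story points completed after each
# sprint, and total points in scope (scope can grow mid-release).
completed = [0, 18, 35, 50, 68]
scope = [80, 80, 90, 90, 95]

# Burn down plots remaining work only, so scope growth is hidden
# inside the remaining count.
burn_down = [s - c for s, c in zip(scope, completed)]

# Burn up plots completed work against total scope, making scope
# changes explicit.
burn_up = list(zip(completed, scope))

print(burn_down)    # [80, 62, 55, 40, 27]
print(burn_up[-1])  # (68, 95)
```

Note that the burn-down series dips more slowly in sprint 2 and 4 because scope was added, but the chart itself cannot show why; the burn-up pairs make the scope growth visible.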

Scrum Burndown and Burnup charts


Chapter 6. Configuration Management

A software project generates a large number of artefacts. For example:

  • use-cases
  • class diagrams, collaboration diagrams, activity diagrams, state charts that:
    • model the problem domain
    • specify the design
  • code modules
  • test cases, testing reports and testing scripts
  • documents
    • PMP
    • test plan
    • configuration management plan

A configuration is the sum total of all artefacts, their current state and the dependencies between them.

The problem is change. Need to make sure changes don’t leave the configuration in an inconsistent state.

The aim of Software Configuration Management (SCM) is to establish processes and set up repositories to manage change properly, without losing overall consistency. It addresses the following questions:

  • How do we manage requests for change?
  • What and where are the software components?
  • What is the status of each software component?
  • How does a change to one component affect others?
  • How do we resolve conflicting changes?
  • How do we maintain multiple versions?
  • How do we keep systems up to date?

SCM processes typically have the following aims:

  1. Identification
  2. Version control
  3. Change control
  4. Configuration auditing (consistency checked)
  5. Configuration reporting (status is reported)

1. Identification

The configuration consists of configuration items. The items can be:

  • basic - e.g. classes
  • aggregate - e.g. main program
  • derived - e.g. object code

Identify process


Aim of identification is to determine what items will be produced and how they will be managed.

2. Version Control

Managing the different versions of all the configuration items.

A version control system typically consists of:

  • A repository - for storing config items
  • Version management functions - to create and track versions, and roll back if necessary
  • A make facility - collect all config items and build

Choice to make is what objects need to be tracked and at what level of granularity. There is a trade-off between computing resources and progress insights when considering this.

All SCM information should be stored in a repository or configuration database.

Note: Tools like CVS may only store information on versions, and not the dependencies between different configuration items.

  • Version - an instance of a model, document, code or other config item which is functionally distinct from other system instances
  • Variant - an instance of a system which is functionally identical but non-functionally distinct from other instances of a system
  • Release - an instance of a system which is distributed to users outside of the development team

Derivation history - a record of changes applied to a config object once it is under version control. It is the sequence of changes, tracked by version numbers, that the config item went through.

Each change in the derivation history should record:

  • change made
  • rationale for the change
  • who made the change
  • when it was implemented

Information about changes can be included as comments in the code. Tools can process the history automatically if a particular style is followed, and insert it into the derivation of a config object in the version control repo.

Method for version tracking:

  • version numbering - e.g. V1.1, V1.2, V2.1a
  • attributes - e.g. Date, Creator, Language, Customer, Status
    • attributes need to be carefully chosen so that they can uniquely identify the version

A combination of both is often used.

3. Change Control

Changes often affect multiple configuration items and multiple people. Some common changes are:

  • Enhancements/additional features
  • Changes to requirements
  • Refinement of existing requirements
  • Changes to technology
  • Changes to design

Part of an overall config management plan is a change management plan.

Three steps to making a change:

  1. Initiate change
  2. Evaluate change
  3. Make change

Factors considered when evaluating a change:

  • Size
  • Complexity
  • CPU and memory impact
  • Cost
  • Test requirements
  • Impact on current work
  • Politics from customers/marketing
  • Is there an alternative?

Important to differentiate the config items that are stable and can be used by others from those that are unstable.

A baseline is an artefact that is stable. That is, it has been formally reviewed and agreed upon, that is now ready for use in future development, and can only be changed through formal change management procedures.

Can have different baselines in a project.

4. Auditing and 5. Status Reporting

Configuration audits assure that what is in the repo is actually consistent and that all changes have been made properly.

Auditing and Status Reporting are common ways to keep track of the status of a repo.

Common questions:

  • Have requested changes been approved and made?
  • Have config objects passed QA?
  • Do attributes of config item match the change?
  • Does each config item have appropriate change logs?

Software PM Tools and Techniques


Project Management (PM) is one of the most dynamic management fields.

PM processes:

  • initiating
  • planning
  • executing
  • monitoring
  • controlling
  • closing

PM efforts used to centre on plan-driven SW development. Now PMs use Dynamic Project Management to work with the needs of the team.

Summary about Agile:

  • different philosophy
  • has its own challenges
  • needs continuous user involvement
  • built on agility and creativity of teamwork

PM Tools

Modern PM tools/software can be/provide:

  • real-time workspaces
  • open source
  • integrated with work practices
  • visibility/transparency of work
  • central place for team members

These reduce the need to performance-manage teams.

Collaboration and Communication Tools

Facilitate team member interaction e.g. Slack, Hipchat, Google docs

Task Management Tools

Include task management features e.g. Asana, Trello, Todoist/Wunderlist

Features of tools are important from a cognitive viewpoint.

Chapter 7. Metrics, Cost and Estimation

A primary responsibility of the project management team is to monitor the execution of the project.

Typically consists of measuring and analysing information:

  1. Estimates on future performance
  2. Comparing estimates to actual performance
  3. Assessing quality of outputs of various activities


4 reasons to measure software:

  1. Characterise - understand projects and establish baselines for comparison
  2. Evaluate - determine status with respect to plans
  3. Predict - estimate future performance by understanding relationships and building models
  4. Improve - identify roadblocks and opportunities to improve quality

Metrics must be used wisely:

  • Provide regular feedback to teams collecting metrics
  • Work with engineers and teams to set clear goals and metrics used
  • Never use metrics to threaten
  • Don’t perceive metrics data as negative, use it to highlight areas for scrutiny
  • Don’t get obsessed with a single metric
  • Use common sense and organisational sensitivity when interpreting data

Types of metrics

  1. Process metrics - across all projects over long period of time for process improvement
  2. Project metrics - collected over single project → check status, track risks, adjust workflow etc
  3. Product metrics - assessing quality of the product

Software Quality Determinants

Quality Determinants


Even though product metrics are most important in the end, we cannot measure them initially. It is therefore important to measure aspects of the process, and hence the project, to ensure we deliver a high quality product.

Process, Project and Product Metrics


Attributes of useful metrics:

  1. Simple and computable
  2. Empirically and intuitively persuasive
  3. Consistent and objective
  4. Consistent units
  5. Programming language independent
  6. Useful for providing feedback

Lines of Code (LOC) is a simple example of a product metric.

  • Physical LOC - actual lines
  • Logical LOC - ignores comment and blank lines (must be tied to a programming language)
| Indicator             | Meaning         |
|-----------------------|-----------------|
| LOC < 1,000           | Straightforward |
| 1,000 < LOC < 100,000 | Medium          |
| LOC >= 100,000        | Difficult       |
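As a rough sketch, physical and logical LOC for a Python-style source string could be counted like this (the `#` comment convention is an assumption; real counters are language-specific):

```python
def physical_loc(source):
    # Physical LOC: every line, including blanks and comments.
    return len(source.splitlines())

def logical_loc(source):
    # Logical LOC (simplified): skip blank lines and '#' comment lines.
    return sum(1 for line in source.splitlines()
               if line.strip() and not line.strip().startswith("#"))

code = """# compute a square
x = 4

y = x * x
print(y)
"""
print(physical_loc(code), logical_loc(code))  # 5 3
```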

Measuring Complexity

  • Cyclomatic complexity - for source code, determines the number of different paths in a program
  • CK metrics suite - suite of measures for OO designs
  • Function point analysis - for requirements models, estimates the amount of functionality of a system

Cyclomatic Complexity

A more accurate way to measure complexity than LOC is how many decision points exist in a program. Cyclomatic complexity is the most common of such methods.

Cyclomatic complexity is the upper bound of the number of linearly independent paths in the code of a program.

Convert to control flow graph (directed graph representing all paths that can be executed)

max_linear_paths = e - n + 2


e = edges, n = nodes


max_linear_paths = D + 1


D = decision points

Control Flow graph

Control Flow graph

int gcd(int x, int y) {
    while (x != y) {
        if (x > y) {
            x = x - y;
        } else {
            y = y - x;
        }
    }
    return x;
}

cyclomatic_complexity = max_linear_paths

Using edges and nodes:

cyclomatic_complexity = e - n + 2
                      = 6 - 5 + 2
                      = 3  

Using decision points:

cyclomatic_complexity = D + 1
                      = 2 + 1
                      = 3
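The same calculation can be checked programmatically. The control flow graph below is a hand-built sketch of gcd with a slightly different node breakdown than above (7 edges, 6 nodes), but it yields the same complexity:

```python
# Hand-built control flow graph for gcd; node names are illustrative.
edges = [
    ("entry", "while"),   # reach the loop test
    ("while", "if"),      # x != y: enter the loop body
    ("if", "x -= y"),     # x > y branch
    ("if", "y -= x"),     # else branch
    ("x -= y", "while"),  # loop back to the test
    ("y -= x", "while"),  # loop back to the test
    ("while", "return"),  # x == y: leave the loop
]
nodes = {n for edge in edges for n in edge}

e, n = len(edges), len(nodes)
print(e - n + 2)  # 3

decision_points = 2  # the while test and the if test
print(decision_points + 1)  # 3
```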

Applications of Cyclomatic complexity

  1. Measuring complexity of a program - estimate of how much a developer needs to track and examine
  2. Testing - get an idea of the amount of test cases
  3. Defect Estimation - more likely to contain faults → spend more time testing

CK Metrics Suite

Measure of the complexity of object-oriented design.

Metrics based around classes:

  • how large they are
  • how methods within class interact
  • how classes interact

Six metrics:

  1. Weighted Methods per Class (WMC)

    Requires a measure of the complexity of each method in a class. If source code isn’t available it must be estimated.

    WMC = summation of complexity measure for each method in class

  2. Depth of Inheritance Tree (DIT)

    Maximum length path from root class to a leaf node.

    e.g. For CC2.1.1, DIT = 4

  3. Number Of Children (NOC)

    Number of direct children of a class.


    • C = 2
    • C1 = 1
    • C2 = 3
    • C2.1 = 1
  4. Response For a Class (RFC)

    The response set for a method is the set of methods that can be invoked when that method is called.

    RFC = summation of response sets in class

  5. Coupling between object classes (CBO)

    Number of relationships a class has with other classes other than via inheritance (such as aggregation and association). Class A is coupled to Class B if either of them “act upon” each other (bi-directional).

    Higher CBO means less likely to be reusable due to dependencies involved. Also makes maintenance and testing more difficult due to number of interactions occurring between classes.

  6. Lack of cohesion in methods (LCOM)

    Each method accesses zero or more attributes of the class.

    LCOM is the number of pairs of methods whose similarity is zero, minus the number of pairs of methods whose similarity is not zero.

    m1 = {v1,v2}

    m2 = {v2,v3}

    m3 = {v4}

    m1 intersection m2 = {v2}

    m1 intersection m3 = {}

    m2 intersection m3 = {}

    zero_similarity_pairs = 2

    nonzero_similarity_pairs = 1

     LCOM = zero_similarity_pairs - nonzero_similarity_pairs
          = 2 - 1
          = 1

All values in this metric suite should be kept as low as possible.
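A minimal sketch of the LCOM calculation, representing each method by the set of attributes it accesses (floored at zero, as is conventional for CK LCOM):

```python
from itertools import combinations

def lcom(methods):
    """CK LCOM: pairs of methods sharing no attributes, minus pairs
    sharing at least one, floored at zero."""
    zero = nonzero = 0
    for a, b in combinations(methods.values(), 2):
        if a & b:          # the pair shares at least one attribute
            nonzero += 1
        else:              # zero similarity
            zero += 1
    return max(zero - nonzero, 0)

# The worked example above: m1 and m2 share v2; the other pairs share nothing.
print(lcom({"m1": {"v1", "v2"}, "m2": {"v2", "v3"}, "m3": {"v4"}}))  # 1
```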


Function Point Analysis

For measuring requirements complexity.

A function point is a unit of measurement that is used to express the amount of functionality in a software system, as seen by the user.

A higher number of function points indicates more functionality, and empirical evidence demonstrates that there is a positive correlation between function points and the complexity of the system.

Function points typically used to:

  • estimate cost and effort required to design, code and test a software system
  • predict the number of errors
  • predict the number of components in software

To calculate the function points of a system:

  1. Categorise each of the functional requirements in the user requirements into one of 5 categories
  2. Weight complexity of these categories for particular application
  3. Calculate total count from categories and their complexity
  4. Calculate value adjustment factors, which weight non-functional requirements into estimate
  5. Calculate total function points count using a formula

Note: Detailed Function Point calculations NOT examinable

Step 1 - Categorising Functions

Collect data from requirements spec. Categories:

  1. Internal logic file (ILF) - e.g. table in a relational database
  2. External interface file (EIF) - e.g. data on third party server
  3. External Input (EI) - e.g. data field populated by a user
  4. External Outputs (EO) - e.g. screens, error messages
  5. External inquiries (EQ) - e.g. similar to EO but doesn’t require any derived data from system

1&2 → data functions

3-5 → transaction functions

Step 2 - Assign Complexity Value

Complexities are ranked as either simple, average, or complex. The complexity value represents all functions of the category.

To calculate complexity value, we can count number of:

  • Data Element Types (DET) - unique visible data field
  • Record Element Types (RET) - subgroup/child of a DET
  • File Type References (FTR) - file referenced by a transaction

All categories are DETs.

  • Data functions → RETs
  • Transaction functions → FTRs

Use DET vs RET and DET vs FTR tables to work out complexity value (weight factor).

Step 3 - Calculate Count Total

for (category : category_list) {
    count_total += category.count * category.weight_factor;
}

Where category_list is list of ILF, EIF, EI … etc.

Step 4 - Calculate Value Adjustment Factors (VAF)

VAFs are the non-functional characteristics of the system. There are 14 questions, answered on a scale of 0 to 5. Here are some examples:

  • Does system require reliable backup and recovery?
  • Are there distributed processing functions?
  • Is the internal processing complex?

The VAF is the summation of all 14 answers.

Step 5 - Calculating the Function Point Count

Apply formula.

Cost and Effort Estimation Models

Parametric cost effort estimates provide the most accurate forecasts.

Constructive Cost Model II (COCOMO II) is a hierarchical model for cost estimation that uses parametric methods.

Pressman’s four solutions to problem of estimating cost:

  1. Delay estimation as much as possible
  2. Base estimates on data from previous projects
  3. Analyse the system decomposing into smaller parts
  4. Use empirically-based estimation models
  • These solutions should not be considered in isolation.
  • Estimates should constantly be revisited.

Biggest cost in software projects is usually staff and effort required (and often the hardest to estimate). Large focus, therefore, on effort estimation.

Expert Guesstimation

Effort can be estimated by experts.

Technique 1

Poll several experts and ask for pessimistic (p), optimistic (o) and most likely (m) estimates.

effort = (p + 4*m + o) / 6
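The three-point formula above can be sketched as follows (the example figures are invented):

```python
def three_point_estimate(o, m, p):
    # Weighted average: the most likely estimate counts four times.
    return (p + 4 * m + o) / 6

# e.g. optimistic 10 days, most likely 14, pessimistic 24
print(three_point_estimate(10, 14, 24))  # 15.0
```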

Technique 2 (Delphi technique)

Poll several experts, ask for best judgement, calculate average effort. Experts then see results and get to discuss/revise their estimate if required, until no further revisions are requested.

Parametric estimation

Estimation models, based on empirical evidence, that relate effort to several factors that influence effort e.g. size of project, experience of development team, type of system being developed etc.

Size of project is often the most influential factor.

E = a + b * (S^c) * m * X_vector


S = size of system

a, b and c = coefficients

X_vector = remaining cost factors

m = adjustment multiplier
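The general model can be evaluated with illustrative coefficients. The values below follow the shape of the COCOMO-81 “organic” model (effort ≈ 2.4 · KLOC^1.05 person-months); treat them as an example, not calibrated data:

```python
def parametric_effort(size, a, b, c, m=1.0):
    # E = a + b * S^c * m, where S is project size and m folds in
    # the remaining cost-factor adjustments.
    return a + b * (size ** c) * m

# e.g. a 32 KLOC project with COCOMO-81 "organic"-style coefficients
print(round(parametric_effort(32, a=0, b=2.4, c=1.05), 1))  # ~91.3 person-months
```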

Well known LOC-based models:

  • Walston-Felix model
  • Bailey-Basili model
  • COCOMO-81 model
  • Doty Model

Well known function-point-based methods:

  • Albrecht and Gaffney model
  • Kemerer model
  • Small project regression model

LOC models are polynomial, but function-point models are often linear.

COCOMO II for cost and effort estimation

COCOMO is a set of parametric cost and effort estimation models.

COCOMO II is a hierarchy of empirical models based on project experience.

Models are based on regression analysis.

Model Hierarchy

Each model is applied at a different stage of development:

  1. Application Composition model - early, before requirements
    • requires applications points
  2. Early Design Stage model - when requirements are stable, basic software architecture available
    • requires function points
  3. Post-Architectural Stage model - during detailed design, implementation and test phases
    • requires function points or LOC

Note: Detailed COCOMO calculations NOT examinable.

Application Composition Model

  1. Identify basic application points of system
    • screens
    • reports
    • program components
  2. Classify complexity
    • simple
    • medium
    • difficult
  3. Calculate application points

Get weight from table (this is the complexity weight).

for (app_type : app_type_list) {
    count_total += app_type.count * app_type.weight;
}

  4. Estimate productivity rate

Application points the team can do per person-month.

Use table with category ratings and productivity scores.

  5. Calculate Effort



E = NAP / PROD


E = effort

NAP = number of application points

PROD = productivity rate
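A minimal sketch of the effort calculation (the 112-point and 25 points/person-month figures are made up; COCOMO II also adjusts NAP for reuse, which is omitted here):

```python
def app_composition_effort(nap, prod):
    # E = NAP / PROD: application points divided by productivity
    # (points per person-month) gives effort in person-months.
    return nap / prod

# e.g. 112 application points at 25 points per person-month
print(app_composition_effort(112, 25))  # 4.48 person-months
```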

Early Design and Post-Architectural models

Parametric models that are more sophisticated and accurate than the application composition model.

  1. Estimate size - use function points and a table to estimate logical lines of code
  2. Estimate scale - use equation
  3. Estimate cost driver influence
    • seven factors on a 6 point scale (very low to extra high)
  4. Calculate effort
  5. Calculate time and personnel

Effort Estimation in Agile Development

Agile projects are different because of the lack of detailed specification up front, and the short duration of iterations.

Two broad categories of effort estimation in agile:

  1. Estimate size/complexity of user stories by comparison with other user stories (our focus)
  2. Estimate size/effort within a period of time (time-box)

Estimating by comparison

Two key concepts:

  1. Story points - relative measure of the size of a user story
  2. Velocity - measure of productivity of team, number of story points delivered in a specified time period

Estimation process:

  1. Divide system up into stories
  2. Estimate number of story points, basing off previous stories
  3. Use team’s velocity from previous stories to estimate delivery time of project
  4. Measure actual velocity taken by team
  5. Use velocity to re-estimate time to deliver product

Estimating Story Points

Story points similar to function points in that they measure the size and complexity of the system, but they are relative measures not units of measure. The size of a story point is only relative to the size of other story points.


Guidelines for estimating story points:

  • Estimate by analogy
  • Decompose a story
  • Use the right units
  • “Doer” does the estimation
  • Use group-based estimates

“Ideal days” aren’t often used as stakeholders can confuse them with actual days. Actual days always exceed ideal days.

Measuring Velocity

Velocity is the number of story points completed over a time period:

V = SP / T_i


V = velocity, SP = story points, T_i = time period


Two ways to estimate velocity:

  1. Using historical data
  2. Using data from previous iterations

Estimate delivery time:

T = SPT / V


T = time taken, SPT = story points in total, V = velocity

As the velocity changes, the estimated delivery time will change.
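The two formulas combine into a simple sketch (the sprint counts are hypothetical):

```python
def velocity(story_points, periods):
    # V = SP / T_i : story points completed per time period (e.g. sprint)
    return story_points / periods

def delivery_time(total_points, v):
    # T = SPT / V : periods needed to deliver the whole backlog
    return total_points / v

# e.g. 60 points delivered over 3 sprints, against a 140-point backlog
v = velocity(60, 3)
print(v)                      # 20.0 points per sprint
print(delivery_time(140, v))  # 7.0 sprints
```

If measured velocity drops to, say, 14 points per sprint, the same backlog re-estimates to 10 sprints, which is why the estimate is revisited each iteration.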

Planning Poker

Most commonly used estimation method in agile (modification of Delphi process).

Belief that group planning is more accurate due to a wider range of expertise being drawn on.

Planning poker alleviates two problems:

  • not everyone being involved
  • estimating user stories taking too long


The process:

  1. Players get cards representing story points (in Fibonacci numbers)
  2. Moderator reads the user story
  3. Each player decides how much the story is worth (no discussion or comments)
  4. All players simultaneously show cards
  5. If estimates aren’t within a certain range of each other they are discussed (discussion time-boxed to a few minutes)
  6. If estimates are similar, average taken

Anchoring is when developers’ opinions are biased by the previous cards of an influential developer, e.g. an experienced developer weights a story very low, so in the next round other developers are biased to weight the story low as well.

To prevent anchoring:

  1. Moderator collects cards and only the average is discussed
  2. Average taken and not discussed (encourages players to re think their estimates based on group average)

Planning poker should not be used for estimating large-scale applications or large user stories.

Works well under following conditions:

  • team is diverse
  • story is suitable to multiple iterations
  • when team estimating is implementing
  • team have worked together on projects before

Chapter 8: Risk Management

High level of uncertainty in software projects as they often haven’t been done before. Planning for and managing this uncertainty is called risk management.

Better risk management results in fewer required resources and less contingency.

Risk, Uncertainty and Risk Exposure

A risk is a possible future event that has some expected impact.

The definition of risk is not restricted to negative events, as a positive risk that isn’t planned for (e.g. an early finish) will mean failing to take advantage of the upside (e.g. starting other projects).


Uncertainty → probability < 1 of event happening; doesn’t consider impact

Risk → probability < 1 of event happening, but considers impact

Problem → a risk with probability = 1

All risk is uncertainty, but not all uncertainty is risk.

Determining which events are risks takes analysis of the following three properties:

  1. Probability
  2. Impact
  3. Degree of control

Risk_exposure = probability x impact


0 < probability < 1

and impact has a finite grade, such as

1 < impact < 5

[A monetary scale (cost) could also be used]

Risk-free event → probability or impact = 0
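A sketch of computing and ranking risk exposure for a hypothetical risk register (the names, probabilities and impacts are invented):

```python
# Hypothetical risk register: probability in (0, 1), impact graded 1 to 5.
risks = {
    "late third-party delivery": (0.4, 4),
    "key developer leaves": (0.1, 5),
    "requirements churn": (0.7, 2),
}

# Risk exposure = probability x impact; rank descending to prioritise.
exposure = {name: p * i for name, (p, i) in risks.items()}
for name, value in sorted(exposure.items(), key=lambda kv: -kv[1]):
    print(f"{value:.1f}  {name}")  # highest exposure first
```

Note the ranking: a likely-but-mild risk (churn, 1.4) outranks a severe-but-unlikely one (departure, 0.5), which is exactly what multiplying probability by impact captures.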

Some common risks:

  • timing of delivery
  • third-party application reliability
  • inconsistent behaviour of applications

Risk Management Activities

A structured approach to dealing with risk within a project as opposed to an ad-hoc approach.

Two broad categories of risk management:

  1. Risk assessment - identify, analyse, prioritise
  2. Risk control - remove, mitigate, accept, minimise

Risk assessment and control broken down further


Risk management as an iterative and ongoing process


  • Generic risks → occur in all projects
  • Specific risks → only relevant to particular projects

Job of a risk manager is to control the specific risks. The software development life cycle models implicitly manage the generic risks.

Risk Assessment

Identifying, analysing and prioritising risks.


Risk Identification

Techniques for risk identification:

  • Pondering - pencil and paper approach, sit and think
  • Interviews/questionnaires - from domain experts
  • Brainstorming - 6-12 experts generate possible risks, “crazy ideas” encouraged
  • Checklists - used as trigger points for further thought

Identifying risks should be viewed as a positive activity, so people don’t discount/deny them → “removing the blinkers”


A structured process to identify hazards/risks: a set of guide words is applied to a set of parameters of the process. Each guide word represents a deviation from the design intent of the parameter.

Example guide words:

  • NOT
  • MORE
  • LESS
  • LATE
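The guide-word technique can be sketched as crossing each guide word with each process parameter to generate deviation prompts. The guide words are from the notes; the parameter names are invented for illustration:

```python
from itertools import product

# Guide words from the notes; the parameters are illustrative examples only.
GUIDE_WORDS = ["NOT", "MORE", "LESS", "LATE"]
parameters = ["data delivery", "user load"]

# Each (guide word, parameter) pair describes a deviation from the design
# intent of that parameter, e.g. "LATE data delivery".
prompts = [f"{word} {param}" for word, param in product(GUIDE_WORDS, parameters)]
for p in prompts:
    print(p)
```

Each prompt then triggers a "could this happen, and what would it mean?" discussion.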


  • Estimate risk probability
  • Estimate risk impact
  • Identify root cause

Past projects and expert judgement are highly useful in estimating probability and impact.

Root cause can often be identified systematically by working backwards.


Calculate risk exposure and order risks.

Risks are then classified and placed in a risk register/log.
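The prioritisation step can be sketched as sorting a register by exposure. The risk names are taken from the "common risks" list above; the probability and impact estimates are invented for illustration:

```python
# A minimal risk-register sketch: each risk gets probability/impact estimates
# from analysis, then the register is ordered by exposure (probability x impact).
risks = [
    {"name": "timing of delivery",                 "probability": 0.5, "impact": 4},
    {"name": "third-party application reliability", "probability": 0.2, "impact": 5},
    {"name": "inconsistent behaviour of applications", "probability": 0.4, "impact": 2},
]

for r in risks:
    r["exposure"] = r["probability"] * r["impact"]

# Order the register from highest to lowest exposure.
register = sorted(risks, key=lambda r: r["exposure"], reverse=True)
for r in register:
    print(f'{r["name"]}: exposure {r["exposure"]:.1f}')
```

The highest-exposure risks at the top of the register get attention first during risk control.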

Risk Control

General techniques for risk reduction:

  1. Avoid
  2. Mitigate
  3. Transfer
  4. Accept

Risk control can be:

  • reactive - waiting for risk to happen then dealing with it
  • proactive - identify situations that could arise from a risk and plan how to deal with each

Risk reduction leverage is used to assess the effectiveness of risk control.

RR_leverage = (initial_RE - residual_RE) / cost_of_RR


RR = risk reduction

RE = risk exposure

If RR_leverage < 1, the risk reduction is not cost effective enough to implement.
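A minimal sketch of the leverage calculation, with illustrative numbers:

```python
def risk_reduction_leverage(initial_re: float, residual_re: float, cost: float) -> float:
    """RR_leverage = (initial_RE - residual_RE) / cost_of_RR."""
    return (initial_re - residual_re) / cost

# e.g. a control that cuts exposure from 8 down to 2 at a cost of 3 units:
rrl = risk_reduction_leverage(8, 2, 3)
print(rrl)  # -> 2.0, i.e. leverage > 1, so the control is cost effective
```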

Regression Testing

Regression testing is the process of running all system tests after a change, even if those tests are unrelated to the change.

Crisis Management

Crisis management is a process for dealing with an unpredicted event that has a negative impact.

Differences from risk management:

  1. positive impacts not considered
  2. event is unpredicted

Risk can become a crisis when the plan for dealing with that risk fails.

Chapter 9: Quality Assurance

Quality must be built into the software from the beginning → quality assurance.

Quality standards depend on the role of the user. Roles/perspectives:

  • End-user perspective
    • judged through interaction
    • fit for purpose
    • reliable
    • reasonable performance
    • easy to learn
  • Developer perspective
    • number of faults
    • ease of modifying system
    • ease of testing
    • ease of understanding design
    • re-usability of components
    • resource usage

Ultimately, the end-user perspective matters. However, the developer's perspective is important because it leads to fewer mistakes and better development, and hence better end-user quality.

Monitoring the quality of the software process can be a proxy for monitoring product quality.

Quality Models

Software engineers use quality models that decompose the concept of quality into measurable attributes. This attempts to remove bias from quality assurance.

For a system to have quality, it should:

  • satisfy explicit functional and non-functional requirements
    • correct, complete and consistent
  • adhere to internal/external standards
  • conform to implicit quality requirements
    • performance, reliability, usability, safety, security etc

A quality model presents a standardised set of measurable attributes that can be used to judge the quality of a system.
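One way to read "decompose quality into measurable attributes" is as a weighted scorecard. The attributes, scores and weights below are invented for illustration and are not a standard model:

```python
# Hypothetical quality model: quality broken into measurable attributes,
# each scored 1-5 and weighted by importance (weights sum to 1).
attributes = {
    "reliability": (5, 0.4),   # (score, weight)
    "usability":   (3, 0.3),
    "performance": (4, 0.3),
}

# Overall quality is the weighted sum of the attribute scores.
quality_score = sum(score * weight for score, weight in attributes.values())
print(round(quality_score, 2))  # -> 4.1
```

Because every attribute is scored against a stated scale, different assessors judging the same system should arrive at similar results.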

Quality model example

The Software Quality Dilemma

  • High quality software takes time/money
  • Low quality software also costs money, through lost sales and high maintenance costs

Must produce software that is “good enough” (different for each project). Have to weigh up costs of quality assurance measures against cost of potential risk.

Cost of quality

Software is more costly to fix later on in development.

Quality assurance is the monitoring and evaluation of the various aspects of project, service or facility to ensure that standards of quality are being met.

Aim is to provide a high level of confidence that program meets the needs for which it was written.

Different from quality control, which has the specific purpose of testing a product before release.

An important aspect of engineering, as opposed to just development, is the ability to exert control over the level of assurance of a project.

Product vs Process

If products are lacking in a particular attribute, we can’t always just “hire better developers” to fix it. We need to change the process.

The quality of the product depends on the quality of the process.

A team must put more effort into quality attributes that are most important. To do this:

  • Decide on targets for process measures
  • Decide on targets for product measures

Methods for assuring quality:

  • Technical reviews
  • Audits
  • Testing and Measurement

Technical Reviews

Type of peer review. A peer (3rd party) is more likely to notice faults.


  • Can be performed on any software artefacts
  • Earlier detection of problems
  • Less likely to make mistakes fixing review faults than fixing testing faults
  • Fault detection rate is high for reviews
  • Finds the actual fault rather than just indicating there is one

Informal Peer Reviews → a “desk check”, can use a checklist of questions to trigger more in-depth review

Formal Reviews → meeting of group of project stakeholders with purpose of improving quality of a software artefact. People want to “feed their ego” and find all the defects.

Review meetings:

  1. 3-5 people
  2. <90 mins
  3. Have a leader, author, reviewer(s), recorder

Walkthroughs and Inspections very similar → can be treated the same. Only differ in:

  • role of moderator
  • preparation expected
  • action taken on defects

Tips for a good review:

  1. Constructive criticism
  2. Stick to agenda
  3. Minimise discussion
  4. Allocate time for reviews

Review Metrics

Err = total number of errors found

Effort = total effort - e.g. number of pages

S = size of artefact - e.g. number of class diagrams

Error_density = Err / S


Rate_of_error_detection = Err / Effort
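The two metrics can be sketched directly. The sample figures are invented, and effort is measured here in review hours as an illustrative choice:

```python
def error_density(errors: int, size: float) -> float:
    """Error_density = Err / S (errors per unit of artefact size)."""
    return errors / size

def error_detection_rate(errors: int, effort: float) -> float:
    """Rate_of_error_detection = Err / Effort."""
    return errors / effort

# e.g. 12 errors found across 4 class diagrams over 3 hours of review:
print(error_density(12, 4))         # -> 3.0 errors per diagram
print(error_detection_rate(12, 3))  # -> 4.0 errors per hour
```

Tracking these across reviews helps judge whether a review was thorough or an artefact is unusually error-prone.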

Software Audits

Not looking for defects, but checking whether an artefact complies with a specified standard/process.

Authors aren’t involved, and audit often performed by external third party.


  1. Product audits - e.g. check a software module complies with a coding standard
  2. Process audits - e.g. checking repository logs to ensure team is following proper commit process

Process Improvement

Capability Maturity Model (CMM) aims to provide an assessment of the process maturity applied by organisations. Provides a 1-5 rating scale:

  1. Initial
  2. Repeatable
  3. Defined
  4. Managed
  5. Optimised

More recently, the Capability Maturity Model Integration (CMMI) has been developed, breaking organisations into process areas so that different levels of maturity can be analysed across different areas.