Tracking key performance indicators is widely recognized as critical to improving projects, but knowing which data to gather determines whether contractors fully benefit from the information.
Data gathering and analytics allow construction professionals to track metrics like never before, but to avoid drowning in all this new data, practitioners need to track the metrics that matter. So which are they?
Metrics Most Frequently Used and Valued by Contractors
RFIs, change orders, schedule, costs, errors and omissions, and safety rank among the most tracked and most valued metrics. A 2018 study of key performance indicators (KPIs) for contractors, conducted by Dodge Data & Analytics, revealed the following:
- Nearly 90% of general contractor/construction manager respondents routinely log RFIs on a majority of projects.
- Nearly 70% document the majority of change orders and evaluate their schedule impact.
- 60% frequently capture errors, omissions and constructability issues in construction documents.
- Over half make regular use of software to help manage safety and inspections.
Metrics Sought by Owners
Anecdotally, many owners would welcome data from their peers on the number or dollar value of projects managed per person, and salaries paid, but the Construction Owners Association of America’s (COAA) general counsel warns that this type of sharing could run afoul of antitrust regulations.
Other kinds of data that could be usefully tracked and safely shared, suggests Howie Ferguson, COAA’s executive director, include multi-project comparisons of predicted versus actual duration of design and construction phases. Such data would update a 2003 study by the University of Texas of its own projects, which found that its predictions of construction duration were, on average, accurate within 3%, but its predictions of design phase duration underestimated by, on average, 88%.
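The predicted-versus-actual comparison above reduces to simple arithmetic. A minimal sketch of how an owner might compute average prediction error across a portfolio of projects (the function name and the sample durations are hypothetical, not drawn from the University of Texas study):

```python
# Average prediction error for project phase durations, in the spirit
# of the predicted-vs-actual comparison discussed above.
# All (predicted, actual) pairs below are hypothetical examples.

def avg_prediction_error(projects):
    """Mean of (actual - predicted) / predicted, as a percentage.

    A positive result means durations were underestimated on average;
    a negative result means they were overestimated.
    """
    errors = [(actual - predicted) / predicted
              for predicted, actual in projects]
    return 100 * sum(errors) / len(errors)

# (predicted_months, actual_months) for three design phases
design_phases = [(6, 11), (4, 8), (5, 9)]
print(f"Design phase error: {avg_prediction_error(design_phases):+.0f}%")
# prints "Design phase error: +88%"
```

A portfolio-wide number like this is only a starting point, but trended over time it shows whether an owner's phase-duration estimates are improving.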
A metric that could help owners avoid duplication of effort is the percentage of design feedback that a project team fails to address in successive deliverables—and which therefore has to be repeated. “That metric could tell me whether I wanted to work with that team again,” says Ferguson.
Some owners track requests for information (RFIs) as an indicator of a poor set of design documents, but in Ferguson’s experience, that is not productive. “RFI data could be an indicator of something to look into,” he says, “but raw quantity doesn’t tell you anything.” An RFI may indicate a gap in the documents, but it could just as easily indicate an inferior contractor asking unnecessary questions, or a capable one taking proper care on a complex project.
Lagging Versus Leading Indicators
Asking what is behind a metric and why it matters constitutes the first step toward identifying indicators that go beyond describing to actually predicting outcomes—especially successful outcomes. “Too often we’re doing failure analysis based on the root causes of a problem,” says Sue Klawans, a senior construction executive and Lean management consultant. “But what if we looked at the root causes of success?”
As a starting point, Klawans refers to the concept of visual management, a crucial element in the Lean paradigm and a rubric for monitoring and communicating information. Visual management enables key data to register at a glance, fostering both micro-advancement (what we’re doing onsite for the week becomes clearer) and macro-advancement (our project is more profitable, and we’re achieving agreed deliverables). It expresses a team’s shared understanding of what to measure and why. And it enables the team to see where a given metric stands relative to a shared goal, so the information can inform action.
“People think that the things they want to measure are whether a project was on budget, or on schedule, or how many change orders it had,” says Klawans, “but when you really look at visual management systems, you begin to understand that all of those things are lagging indicators. They’re consequences.” The leading indicators, the ones that predict a consequence, are what Klawans calls “metrics that matter.”
Identifying predictors of success poses more of a challenge than tracking lagging indicators, especially as the definition of success can vary from project to project. However, in a 2017 session of the Associated General Contractors of America (AGC)’s Public/Private Industry Advisory Council (a group set up to promote dialogue with public and private owners involved in facility construction), some 40 to 50 owners, architects, general contractors and trade contractors took a crack at it, with subsequent sessions at COAA conferences expanding the effort.
Team Health Is a Leading Metric
At the outset, most participants reported measuring lagging indicators, often without ploughing the knowledge gained back into improving the next project. However, in the course of reflecting on what made some of their projects successful and others just average, the group reached what Klawans calls an “aha!” moment. “The people in the room, practitioners who run projects for their universities and their hospitals, were arriving at a shared conclusion that it’s the team’s functionality that drives success,” she says. “Team health is the leading indicator.”
The COAA session defined team health as a function of the group developing a culture, process and capability that sustains it through the project. Good team health means you don’t have to stack a project team with A-plus players: competent people with the right mind-set and a strong framework can achieve extraordinary results.
A pair of quantitative studies conducted by Dodge Data & Analytics for the Lean Construction Institute supports the group’s finding. In each study, one conducted with owners and one with architects and designers, a strong team culture was a common feature on the best projects that respondents had worked on and far less common on typical projects.
But if team health is the metric that matters, how do you track it and integrate it into decision-making?
To make a start, the COAA session developed options for participants to pilot. For example, would it be effective for a team to gather together its obstacles, assess their relative significance and the team’s capability to clear them, and then monitor progress monthly? Or would it be useful to conduct a short monthly survey focused on team health and review observations at the next team meeting, with sample questions perhaps including:
- Are all project team members actively and meaningfully engaged in the project?
- From my own point of view, am I modeling behaviors to ensure a safe site? Am I seeing all the others on the team model behaviors to ensure a safe site? Is there a difference between those two answers, and why?
- Do we have open and transparent dialogue? Or is information being withheld?
- Do I enjoy being part of this team, and am I proud of what we’re creating?
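A monthly survey like the one above can be turned into a trackable number. One possible scoring sketch, assuming each member rates each question on a 1-to-5 agreement scale (the question keys, responses and scoring scheme here are illustrative; the article does not prescribe one):

```python
# Hypothetical team-health survey scoring: each team member rates each
# question from 1 (strongly disagree) to 5 (strongly agree).
# Question keys and responses below are illustrative examples only.

def team_health_score(responses):
    """Average all ratings across members and questions onto a 0-100 scale."""
    ratings = [r for member in responses for r in member.values()]
    mean = sum(ratings) / len(ratings)
    return 100 * (mean - 1) / 4  # map the 1..5 scale onto 0..100

march = [
    {"engaged": 4, "safety": 5, "open_dialogue": 3, "pride": 4},
    {"engaged": 5, "safety": 4, "open_dialogue": 2, "pride": 4},
]
print(f"Team health, March: {team_health_score(march):.0f}/100")
# prints "Team health, March: 72/100"
```

The single score is less important than its trend and its per-question breakdown, which is what the team would review at the next meeting.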
However project teams try to get at the metric of team health, “it’s relatively new in the industry for an owner to take a serious look at the intangibles of a project, and specifically at the morale and health of its project team,” says Ferguson. “But in the last two to four years, people have started coming around to the idea that, even with all this technology, it’s still people who make projects succeed. Those people, their buy-in, their ownership of the process, that actually does matter.”
Originally appeared in Dodge Data and Analytics Smart Market Report (2019). Reprinted with permission.