How to Complete a Successful O&M Assessment in a Developing Country: Part 1

by Andrew Markle, PE and Sanjeev Jolly, PE – Senior Engineers, NAES Engineering Services

In the global marketplace, decision-makers must transcend borders and cultural differences. In this discussion, we draw on our experience in applying management tools commonly used in industrialized countries, such as key performance indicators (KPIs) and benchmarking, to the needs of power generators in less developed countries. Without some adaptation, expected outcomes can easily get ‘lost in translation.’

To that end, NAES has adapted its O&M assessment methodology by molding the power plant’s workflow to better align it with international expectations, while still addressing local challenges to increase operating efficiency. We’ve found it essential to lay additional groundwork for an O&M assessment in a developing country to ensure that NAES and its client are speaking the same language – both literally and figuratively.

Initial data collection, for example, will require extra effort if a plant lacks electronic documentation, standardized nomenclature and a common language with the assessing party. Safety, operations, maintenance and other KPIs need to be benchmarked within meaningful contexts to set the stage for continuous improvement of O&M performance.

We’ll divide this discussion into two parts: Part 1 will focus on the NAES O&M assessment process and how we’ve adapted it for use in developing nations; Part 2 (in the next Technically Speaking) will address the social challenges of working across cultures and the ‘cultural homework’ you need to do to ensure the collaboration is successful. We deem it a success once a system has been put in place that compares meaningful KPIs against those of competitors and enables a relatively isolated power asset to establish a rhythm of continuous improvement.

The NAES O&M Assessment Process

The goal of a NAES O&M assessment is to evaluate analytically where the plant’s operational efficiency currently stands and where it can improve, and to develop a program of continuous improvement. This sets in motion an ongoing process that uses KPIs to establish a baseline and then monitor progress. KPIs are metrics that allow a plant to focus on issues critical to its safe and profitable operation. We break them down into four categories – Operating, Maintenance, Safety/Environmental and Financial – as represented by the following examples:

  • Operating KPI Examples
    Net Capacity Factor (%)
    Startup Reliability Factor (%)
    Average Employee Turnover (%)
  • Maintenance KPI Examples
    Preventive Maintenance (hrs)
    PM Completed On Time (%)
    Inventory Turnover (%)
  • Safety/Environmental KPI Examples
    OSHA Recordables (each)
    Open Safety Work Orders (each)
    Environmental Excursions (each)
  • Financial KPI Examples
    Fixed Operational Costs (MM US$/yr)
    Non-Fuel Variable Operational Costs (US$/MWh)
    Cost of Fuel (US$/MWh)
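
To make these metrics concrete, here is a minimal sketch of how two of the operating KPIs above could be calculated from raw plant data. The figures and variable names are illustrative assumptions, not values from an actual NAES assessment.

    # Illustrative KPI calculations (hypothetical plant data)
    period_hours = 8760              # one full year
    net_capacity_mw = 450            # assumed net plant capacity
    net_generation_mwh = 2_365_200   # assumed metered net generation for the year
    attempted_starts = 52
    successful_starts = 49

    # Net Capacity Factor (%): actual generation vs. maximum possible generation
    net_capacity_factor = 100 * net_generation_mwh / (net_capacity_mw * period_hours)

    # Startup Reliability Factor (%): successful starts vs. attempted starts
    startup_reliability = 100 * successful_starts / attempted_starts

    print(f"Net Capacity Factor:        {net_capacity_factor:.1f} %")   # 60.0 %
    print(f"Startup Reliability Factor: {startup_reliability:.1f} %")   # 94.2 %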

NAES uses this same methodology to assess plants in less developed countries, but there are additional challenges that must be addressed.

Site Visit and Data Collection

During the data collection phase, which starts remotely about two weeks before the site visit, we often find that the plant staff have no electronic documentation available, speak a different language than we do, or refer to the requested documentation by different names than we use. To minimize confusion, we’ve found that providing the assessee with a clear checklist of the requested documentation and KPIs works best. The list should be comprehensive and prioritized, clearly indicating the file name of each requested document and identifying who will retrieve it by what date. To further avoid confusion, we include a brief description of the information contained in each requested document.
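
As an illustration, a documentation-request checklist along these lines could be captured in a simple script or spreadsheet; the file names, owners and dates below are hypothetical placeholders, not a NAES template.

    # Hypothetical documentation-request checklist (file names, owners and dates are placeholders)
    document_checklist = [
        {"priority": 1,
         "file_name": "annual_outage_log.xlsx",
         "description": "Unit trip and outage log for the most recent full year",
         "owner": "Operations manager",
         "due": "4 weeks before site visit"},
        {"priority": 2,
         "file_name": "pm_completion_report.pdf",
         "description": "Preventive-maintenance work orders scheduled vs. completed",
         "owner": "Maintenance planner",
         "due": "2 weeks before site visit"},
    ]

    # Present the list in priority order so the most critical documents are gathered first
    for item in sorted(document_checklist, key=lambda d: d["priority"]):
        print(f'{item["priority"]}. {item["file_name"]} - {item["owner"]}, {item["due"]}')
        print(f'   {item["description"]}')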

Gap Analysis and Synthesis

After collecting data through numerous channels during the site visit – document requests, site inspections, staff interviews and a kick-off meeting – we begin the work of analyzing and contextualizing it.

We start with the gap analysis: a running list of potential improvements observed by all participants (the assessor’s and assessee’s project teams). We view this as a brainstorming session, withholding judgment of ideas and their relative value. Ideas can be data-driven, pulled from casual observation or derived from comparison of the plant’s KPIs with those of similar facilities.

Note: Cultural misunderstandings can become problematic during this session. In Part 2, we’ll discuss in detail how to mitigate these by building your cultural awareness.

Once the core project group has completed the gap analysis, we share the ideas generated with subject matter experts (SMEs) throughout NAES and our networks to obtain feedback. By sharing ideas across a broad range of SMEs, we aim to overcome one of the main problems faced by power generators in developing countries – market isolation. While the country in question may have a huge population, it will likely have significantly less power infrastructure.

For example, Nigeria has 174 million citizens – about half the U.S. population – yet its installed capacity is an order of magnitude less than the United States’ 1,070 GW. This creates islanded markets that can be isolated by distance, culture and/or language. Consolidating ideas and feedback into a concise gap analysis list reduces this islanding effect.

Prioritized Recommendations

We generally do some additional investigation to test the validity and relative impact of the ideas and the SMEs’ feedback. We then present the commented gap analysis list to the core project group for prioritization at a face-to-face meeting. After allowing time for discussion and consensus, we prioritize actions into urgent and important (Category 1); not urgent but still important (Category 2); and not urgent or important (Category 3). Again, cultural differences can make this process challenging. At the end of the meeting, we come away with a prioritized list of action items that includes assigned individuals and agreed-upon implementation dates.
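
One way to capture the outcome of that meeting is a simple urgency/importance sort; the example ideas, owners and dates in this sketch are hypothetical.

    # Sketch: sorting agreed gap-analysis ideas into the three priority categories
    def categorize(urgent: bool, important: bool) -> int:
        """Category 1: urgent and important; 2: important but not urgent; 3: neither."""
        if urgent and important:
            return 1
        if important:
            return 2
        return 3

    action_items = [
        {"idea": "Close overdue safety work orders", "urgent": True, "important": True,
         "owner": "Safety lead", "due": "30 days"},
        {"idea": "Digitize maintenance history", "urgent": False, "important": True,
         "owner": "Maintenance planner", "due": "6 months"},
        {"idea": "Repaint warehouse exterior", "urgent": False, "important": False,
         "owner": "Facilities", "due": "as resources allow"},
    ]

    for item in action_items:
        item["category"] = categorize(item["urgent"], item["important"])

    for item in sorted(action_items, key=lambda i: i["category"]):
        print(f'Category {item["category"]}: {item["idea"]} ({item["owner"]}, {item["due"]})')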

Benchmarking

Once the typical safety, operational, maintenance and financial KPIs have been collected, baselined and validated, these values are benchmarked against comparable plants. An assessor with access to data from a large fleet can draw on real-time data from all types of power plants. Where particular regions call for adjustment, the factors can usually be derived from known market information so that the comparison remains meaningful. Once the benchmarking is completed, the observations may prove or disprove ideas presented in the gap analysis or provide further insights; hence another review of the prioritized gap analysis list and action plan is appropriate. At this point the action plan is ready to be implemented.
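
The regional adjustment can be as simple as scaling a fleet benchmark by a known market factor before comparing. The cost figures and the labor-cost factor in this sketch are illustrative assumptions only.

    # Sketch: adjusting a fleet benchmark to local market conditions before comparison
    plant_fixed_cost_musd_per_yr = 9.8    # assessed plant's fixed operational cost (assumed)
    fleet_benchmark_musd_per_yr = 12.5    # comparable plants in the assessor's fleet data (assumed)
    regional_labor_cost_factor = 0.65     # assumed ratio of local to fleet labor costs

    # Scale the benchmark to the plant's market so the comparison is meaningful
    adjusted_benchmark = fleet_benchmark_musd_per_yr * regional_labor_cost_factor
    gap_pct = 100 * (plant_fixed_cost_musd_per_yr - adjusted_benchmark) / adjusted_benchmark

    print(f"Adjusted benchmark:  {adjusted_benchmark:.1f} MM US$/yr")   # 8.1 MM US$/yr
    print(f"Plant vs. benchmark: {gap_pct:+.0f} %")                     # about +21 %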

Continuous Improvement Program

Now that the action plan has been implemented and the plant’s KPIs have been baselined, a continuous improvement program can be put in place. We typically advise that the agreed-upon actions be given six months to produce measurable results. After this time, the KPIs are evaluated and trends identified. We then analyze the changes in KPIs in a group meeting to determine which actions are working and which are not. The action-item list is updated accordingly and realistic goals set for each KPI. If we find KPIs currently being monitored that are not providing meaningful results, we recommend removing them and adding more relevant ones.
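
A six-month review of this kind can be reduced to a simple baseline-versus-current comparison for each KPI; the metrics, values and goals below are hypothetical.

    # Sketch: six-month KPI review against baseline and goal (all values hypothetical)
    kpis = [
        # (name, baseline, current, goal, lower_is_better)
        ("PM Completed On Time (%)",        72.0, 81.0, 90.0, False),
        ("OSHA Recordables (each)",          4,    3,    0,   True),
        ("Non-Fuel Variable O&M (US$/MWh)",  3.4,  3.5,  3.0, True),
    ]

    for name, baseline, current, goal, lower_is_better in kpis:
        improving = current < baseline if lower_is_better else current > baseline
        status = "improving" if improving else "review action items"
        print(f"{name}: {baseline} -> {current} (goal {goal}) - {status}")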

From here on, the group reconvenes periodically to review the KPIs against their goals and assess whether each remains useful. The group updates the action-item list as actions are completed and new ideas surface to meet the ever-changing challenges of operating a power plant.