Guide to implementing patient-reported experience measures (PREMs) into practice

We have information to guide health professionals and organisations through the three stages of implementing patient-reported experience measures (PREMs) into their practice.

A guide to implementing PREMs into practice

There are three stages to consider when implementing PREMs into practice. Consumers should be actively involved in every stage and different teams of health professionals may take the lead at different points. 

We have developed a guide to help organisations through the three stages. The guide is relevant for:

  • any organisation or department that funds, manages or provides health services to patients, from government to a local unit level, and
  • primary care services.
     

Stage 1: Assessing the context

To ensure the selected PREM is meaningful, you need to think carefully about how you want to use the survey and why.

You also need to consider how the PREM can be best used in your organisation, given your existing patient experience work and contextual constraints and enablers.

Stage 1 focuses on:

  • specifying why and how you want to use the selected PREM in your organisation
  • mapping how the PREM fits with your existing patient experience work and survey tools
  • writing a business case for your organisation’s implementation of the PREM, and
  • writing a stakeholder engagement plan to get ‘buy in’ from different groups. 

Audit of current activity

It could be helpful to start with an audit of your organisation’s current activities in collecting patient feedback. 

  • How do these existing activities currently feed into your work on, for example, quality improvement, patient-centred care or accreditation? How would the PREM fit into this picture? What gaps can it fill and what processes or reports can it inform? How can it complement or feed into other activities?
  • If you already use a patient experience survey, what aspects of patients’ experiences do you currently measure, and how do these compare to the selected PREM’s concepts? Will the PREM be used as a replacement for existing tools or as an add-on module?
     

The context for PREM implementation

Organisations implementing a new PREM may or may not have existing survey programs and may or may not have control over whether the selected PREM is administered to their patients.

This affects the type of rationale you develop for using the PREM in your organisation.

Three example scenarios and how these might affect your business case for adopting the selected PREM are:

  • If you have been told you must implement the PREM in your organisation, or that the PREM will be administered to your patients by another organisation or authority
    • note the rationale your head office or regional authority gave for choosing the PREM, and think about what else your organisation can gain from the implementation, so that the resulting data does not simply disappear ‘up the line’ without being put to meaningful use locally
  • If you are already using a patient experience survey in your organisation and switching to a new PREM or adding the PREM to your existing survey
    • consider the implications of the change (cost, concepts measured, change in mode of administration, change in presentation of results)
    • consider how this will affect current reporting of patient experience and what changes will flow through from ‘board to ward’
    • consider how you could use this opportunity to improve how your organisation engages with consumers and collects and uses patient experience information. What benefits or features does the PREM have compared to your old survey and how can you take advantage of these?
  • If you do not have an existing patient survey
    • consider how the results of the PREM could fill gaps in your organisation’s knowledge about the quality and person-centeredness of its care, and how the PREM might complement other types of information your organisation collects to monitor and improve quality and safety.

Ultimate outcomes

When defining the ultimate outcomes you want the PREM to have, you might consider how these can align with and contribute to current policy, operational and strategic objectives.

Think about your organisation’s short- and long-term priorities and plans, areas that have previously been identified for improvement, and the priorities of other organisations whose policies affect you (such as governments and head offices).

It is very unlikely the PREM alone can create all these outcomes – a more comprehensive approach to monitoring patient perspectives and viewing this in the context of other safety and quality information is necessary. However, there is evidence that improved patient experience is associated with many of these outcomes.

Outcomes you choose to monitor the impact of the PREM might include:

  • Positive trends in PREM results over time
  • Early identification of concerning patterns in patient safety near misses, unsafe practices and adverse events
  • Improvement in other types of quality and safety outcomes including clinical outcomes
  • Increasing public and patient trust in your organisation (and its reputation)
  • Positive accreditation outcomes for actions related to person-centred care
  • Increased focus by decision-makers on improving the aspects of experience that are important to patients
  • Increased consumer involvement in decision-making at all levels of the organisation
  • Improved disclosure and communication of adverse safety events to patients.
     

Mechanisms to achieve the outcomes

How will you know that you are on the right track to achieving the ultimate outcomes? Monitoring the intermediate outcomes – the mechanisms your organisation puts in place to trigger the ultimate outcomes – will give you a good indication of progress.

Potential mechanisms that might help achieve the desired outcomes in your organisation include:

  • Regularly reporting results and trends to executive, board, staff and consumers
  • Integrating PREM results into executive and board decision-making processes
  • Creating an automatic trigger for action when PREM results fall below a certain level or when specific red flags are raised by the results
  • Integrating PREMs into routine quality improvement processes and Plan-Do-Study-Act (PDSA) cycles
  • Reporting and comparing PREM results across wards or services
  • Using PREM results to regularly update staff
  • Celebrating PREM results in high-performing or improving parts of the service
  • Establishing consumer focus groups to dig deeper into the results.
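As a minimal illustration of the ‘automatic trigger’ mechanism above, the following Python sketch flags wards whose mean PREM score falls below a chosen threshold or where red-flag responses have been raised. The threshold, field names and data structure are illustrative assumptions, not part of any PREM specification:

```python
# Illustrative threshold: mean score on a hypothetical 1-5 rating scale.
THRESHOLD = 3.5

def check_triggers(ward_results):
    """Return wards needing action.

    ward_results maps ward name to a dict with a 'mean_score' (float) and
    a 'red_flags' count (int); both field names are illustrative.
    """
    alerts = []
    for ward, result in ward_results.items():
        if result["mean_score"] < THRESHOLD or result["red_flags"] > 0:
            alerts.append(ward)
    return alerts

# Example: ward A is below threshold, ward B is fine.
print(check_triggers({
    "A": {"mean_score": 3.2, "red_flags": 0},
    "B": {"mean_score": 4.1, "red_flags": 0},
}))  # → ['A']
```

In practice such a trigger would run automatically whenever new PREM results are loaded, and alerts would feed into existing incident or quality improvement processes.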

Constraints

There may be a range of constraints on the routine collection and use of patient experience information in your organisation. These constraints may affect how the selected PREM is implemented, when it is done and who is involved.

It is important that you identify these constraints early in the process so that you can take them into account in planning your PREM implementation.

Potential constraints include:

  • Existing patient experience measurement activity – you will need to see what your organisation does now, to see how the PREM can complement this activity without ‘reinventing the wheel’
  • Human and financial resources constraints – it may be necessary to think creatively about how the additional work will be done and who will do it. Can the PREM be rolled into an existing process? Will external resources be required? Can resources be shared with other organisations?
  • Cultural constraints – there may be resistance in parts of the organisation to the idea of measuring patient experience and/or using this to improve care; these kinds of issues can be addressed with careful consideration of how to gain buy-in and engagement among different stakeholder groups
  • Legislative and contractual requirements (for example, Department of Veterans Affairs or jurisdictional requirements) – see how the PREM fits in with your existing obligations
  • Reporting cycles – make sure timing of PREM collection and reporting fits well with existing reporting cycles (for example, if reports to the board are quarterly, you may wish to use the PREM in the month before the reports, so that the results can be included)
  • Other events and priorities – you may have other events to consider, such as staff training schedules or consumer forum meetings; you will also have to see where patient feedback fits in your organisation’s priorities.
     

Enablers

Enablers are resources or situations that will help to ensure the success of your implementation. Identifying existing resources will enable you to effectively draw on them when implementing the PREM.

Potential enablers include:

  • Executive sponsors
  • Clinical champions
  • Staff training forums
  • A person-centred care strategy
  • Consumer committees and forums
  • Information on consumer preferences and needs
  • Changes to models of care.

Stakeholder needs analysis

A stakeholder needs analysis will identify the stakeholder groups to involve in the implementation and use of the PREM, and explore their expectations, needs and concerns. You can conduct this analysis with selected hospital staff, or through a broader consultation with stakeholder groups.

‘Marketing’ the PREM

You may need to ‘market’ the PREM to stakeholders, especially during the initial stages of implementation. The marketing should raise awareness among different groups, encourage acceptance of both the process and the use of PREM information, and manage stakeholder expectations of what the PREM can and can’t do.

Marketing activities and resources may include:

  • Materials (posters, brochures, web and intranet information)
  • Events (awareness-raising events, forums, presentations at staff meetings)
  • Staff education or training
  • Champions (both senior leadership and clinician champions can help to cement the role of the PREM in the organisation).
     

Long-term engagement

Usually, the PREM will be implemented in a whole organisation as part of overall strategic planning. However, if the implementation is initiated below senior management level, it is essential that you seek executive or management approval so that the PREM can contribute to broader goals and strategies and work within existing processes (if possible).

To ensure meaningful use of the PREM to achieve your ultimate outcomes over the longer term, you can consider what engagement processes and regular events you need to put in place. For example:

  • Posting results throughout the organisation (through staff reports or ‘how we are doing boards’)
  • Posting results publicly (through the website or media liaison)
  • Holding friendly competitions between wards, units or departments
  • Establishing a consumer and health professional panel to routinely scrutinise results and recommend and track actions
  • Asking consumers to make presentations to staff on issues highlighted by the PREM.

Stage 2: Implementing PREMs

Once you’ve selected a PREM, there are practical actions and important decisions needed to get the PREM surveys to patients and to get responses back.

This information is relevant for:

  • deciding on the appropriate survey sample, customisation of content, timing and mode of administration
  • defining how you will protect the privacy of patients and conduct the survey in an ethical manner
  • thinking through the logistics of getting the survey to patients and getting it back, and
  • writing an implementation strategy and plan.

Variables affecting your rollout strategy

Several factors will shape how you roll out the PREM in your organisation:

  • Time available:
    The amount of time you have will determine how many stages you include and how much testing and refinement you can do before the PREM goes live.
  • If you plan to change the PREM:
    Changing the questions, the way the survey is delivered, or the types of patients you give it to will increase the amount of pilot testing you need. You need to ensure that the revised survey is still valid and reliable.
  • If you are adding the PREM to an existing survey:
    You may need to test the combined survey to make sure the PREM questions don’t interfere with your current questions and that the redesigned survey works as intended.
  • Your current survey systems:
    The technology and processes you already use for patient experience surveys will affect how much new technology, configuration and staff training you need.
  • Your patient administration system:
    What your system can and cannot do will affect how easily you can automate survey distribution and data analysis.
  • Familiarity with patient experience measurement:
    If your organisation is new to PREMs, you may need more communication and education early on to build understanding and support.
  • Familiarity with using patient feedback in quality improvement:
    If staff are not used to integrating patient perspectives into improvement work, you may need additional training, support and process changes.
  • Resources available:
    The staff and funding you have will affect how much piloting and analysis you can do. Skipping pilot testing may seem quicker, but can lead to bigger problems later.
  • In‑house skills:
    Your team’s capacity and expertise will influence whether you need to bring in external contractors to support the rollout.
     

Staged implementation and piloting

How you introduce the PREM into your organisation is an important strategic decision. It affects how well stakeholders engage with the process and how effectively the results are used.

Instead of rolling out the PREM across your whole organisation at once, you may choose to introduce it in stages, starting with one or more pilot sites. 

Benefits of a staged rollout

  • Learn before scaling up:
    Pilots help you identify what works and what needs improvement, so you can refine the process before expanding it to other areas.
  • Build early support:
    Early wins from pilot sites can help demonstrate the value of the PREM, especially to teams who may be unsure about its benefits.
  • Reduce costs and risks:
    Fixing issues during a pilot is easier and cheaper than correcting them after a full rollout.
  • Get direct feedback from patients:
    Pilot patients can comment on the survey’s length, clarity and relevance, helping you improve the tool before full implementation.
     

Prioritisation of pilots

If you decide on a staged rollout, you will need to consider how you prioritise your pilot samples and analyse your pilot data. You will need to develop a rationale for your staging and choice of pilot populations or sites. Some ways to prioritise samples for piloting include:

  • By ward or department (this may be a good choice if you already have clinical buy-in)
  • By reason for admission
  • By type of admission (day stay or overnight).
     

Evaluation of pilots

Consider how you will evaluate the pilots and feed these findings into the full implementation. You could consider:

  • The implementation process itself
  • Stakeholder engagement
  • Survey response and completion rates.
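Survey response and completion rates, mentioned above, are simple ratios that are worth computing the same way across all pilots so results are comparable. A minimal Python sketch (the figures in the example are illustrative):

```python
def survey_rates(sent, returned, fully_completed):
    """Response rate = surveys returned / surveys sent out.
    Completion rate = surveys fully completed / surveys returned.
    Guards against division by zero for empty pilots."""
    response_rate = returned / sent if sent else 0.0
    completion_rate = fully_completed / returned if returned else 0.0
    return response_rate, completion_rate

# Illustrative pilot: 400 surveys sent, 120 returned, 96 fully completed.
print(survey_rates(400, 120, 96))  # → (0.3, 0.8)
```

Tracking both rates separately is useful: a low response rate points to problems with the mode of administration or patient engagement, while a low completion rate points to problems with survey length or question clarity.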
     

Parallel surveying

If you are planning to replace your existing PREM or add to it, you might consider a parallel survey process. This is where you continue to administer your old survey to some patients and at the same time administer the new survey to other patients. The advantages of this are that:

  • It gives you more time for pilot testing without a break in patient surveying
  • Comparative analysis of the results of the two surveys can yield very useful data to guide your implementation and presentation of results, including
    • how response rates and completion rates compare – if they are higher for the updated or new PREM, this helps prove value; if lower, this can inform adjustments to the mode of administration or format of the survey
    • how overall scores compare between the two surveys – this will help to set appropriate levels of expectation for scores on the new survey.

Defining your eligible patient population

  • What types of patients would you like to send the PREM to? To ensure your PREM results represent your patients and provide a good overview of your services, you should consider the different service areas within your organisation, and the demographics of your patients.
  • Are you interested in all patients, or only a subset? If you are interested in a subset, what are the attributes of the patients you want to be eligible to complete the survey? Depending on your objectives for using the PREM, you could restrict your sample in various ways (or a combination of these), for example:
    • by age
    • by condition
    • by hospital, clinic, department or ward
    • by type of service received
    • by length of stay
  • How will you automate the process of identifying your survey sample? Does your existing patient administration system allow you to filter patients by the attributes you are interested in?
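As a sketch of automating sample identification, the snippet below filters hypothetical patient records by the kinds of attributes listed above. The field names and eligibility rule are illustrative assumptions; a real patient administration system will use its own schema:

```python
# Hypothetical records as they might be exported from a patient
# administration system; field names are illustrative only.
patients = [
    {"id": "P001", "age": 72, "ward": "cardiology", "length_of_stay_days": 4},
    {"id": "P002", "age": 35, "ward": "maternity", "length_of_stay_days": 1},
    {"id": "P003", "age": 58, "ward": "cardiology", "length_of_stay_days": 9},
]

def eligible(patient):
    """Example eligibility rule: adult overnight cardiology patients."""
    return (patient["age"] >= 18
            and patient["ward"] == "cardiology"
            and patient["length_of_stay_days"] >= 1)

sample_frame = [p for p in patients if eligible(p)]
print([p["id"] for p in sample_frame])  # → ['P001', 'P003']
```

If your system cannot filter on an attribute you need, that gap is worth identifying now, since it may require adding the attribute as a survey question instead.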
     

Exclusions

You may decide to exclude some patients from your survey population on ethical or pragmatic grounds. These exclusions will depend on your organisation and the type of care it is responsible for, but you might need to consider how to treat patients who:

  • Are having treatment which means they have repeated visits within a short time period (for example, chemotherapy)
  • Are likely to be surveyed using the same or a similar survey by two entities (for example, mental health patients who are given the Your Experience of Service [YES] questionnaire; patients of privately owned public hospital services where a state government and private hospital group might both ask the questions)
  • Are mothers who have experienced a stillbirth
  • Are experiencing temporary or permanent loss of mental capacity
  • Visit the emergency department but are not admitted to the hospital
  • Have diagnosis codes or received types of health service that were excluded from the PREM development and testing process (only if you want to be able to claim validity and reliability without further testing).
     

Sampling from your eligible patient population

Whether you want to know about the experiences of patients across all demographics and all services or only a subset of these, you still have to decide what proportion of the eligible patient group you would like to survey. The decision will be affected by your chosen mode of administration (which determines the cost of each completed response).

You can either ask for responses from every discharged patient in your chosen population or you can develop a sampling frame to determine how a subset of those patients can be chosen.

Sample stratification

If you are not intending to send the survey to all eligible patients (and have decided to take a sample) you now need to decide whether to stratify the sample. This means dividing your eligible patient population into mutually exclusive groups based on a variable of interest (for example, age, admission type, department of admission) and then sampling from each of these groups. This reduces the problem of random sampling from a population where you may entirely miss respondents with some attribute of the variable you are interested in.
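A stratified sample of the kind described above can be drawn with a few lines of Python. This sketch assumes patient records are dictionaries and the stratification variable is a single field; both are illustrative:

```python
import random
from collections import defaultdict

def stratified_sample(patients, key, per_stratum, seed=0):
    """Draw up to `per_stratum` patients from each stratum defined by `key`.

    A fixed seed makes the draw reproducible for auditing; in production
    you may prefer a fresh random draw each survey period.
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for patient in patients:
        strata[patient[key]].append(patient)
    sample = []
    for group in strata.values():
        sample.extend(rng.sample(group, min(per_stratum, len(group))))
    return sample

# Illustrative: 5 day-stay and 3 overnight patients, 2 drawn from each stratum.
patients = [{"id": i, "admission_type": ("day" if i < 5 else "overnight")}
            for i in range(8)]
print(len(stratified_sample(patients, "admission_type", 2)))  # → 4
```

Equal numbers per stratum over-represent small groups relative to the population; if you need population-proportional results, either sample proportionally or weight the strata at analysis time.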

Sample size calculation

If you are not intending to survey all eligible patients, you may wish to draw conclusions from your sample and assert that these apply to your whole eligible population within a margin of error (using confidence intervals). To do this, you will need to set a minimum number of completed responses to include in your analysis, to achieve statistical power.
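One common way to set this minimum is Cochran's sample size formula with a finite population correction, sketched below. The default values (95% confidence, 5% margin of error, worst-case proportion of 0.5) are conventional survey defaults, not PREM-specific requirements:

```python
import math

def sample_size(population, margin_of_error=0.05, confidence_z=1.96, p=0.5):
    """Minimum completed responses for a proportion estimate.

    Cochran's formula n0 = z^2 * p * (1 - p) / e^2, adjusted with a
    finite population correction for small eligible populations.
    """
    n0 = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# Illustrative: an eligible population of 2,000 discharges.
print(sample_size(2000))  # → 323
```

Note this is the number of *completed* responses, so the number of invitations must be inflated by the expected response rate (for example, 323 / 0.3 ≈ 1,077 invitations at a 30% response rate).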

For PREMs aimed at local quality improvement, the results can be used at local level to improve the service's responsiveness to patients' views, to implement feedback loops to ensure patients receive the appropriate response to any issue raised, and to point to issues for further investigation.

Options for timing of administration

There are several options for the timing of the survey:

  • Option 1 – rolling (continuous) administration triggered by discharge (captures all eligible discharges); the administration date is determined by elapsed time since the individual’s discharge
  • Option 2 – periodic administration at regular intervals (captures all eligible discharges since the last administration); the administration date is determined by the date of the previous administration
  • Option 3 – periodic cross-sectional administration (captures eligible discharges only from a certain time period rather than all eligible discharges).

Options for timing in relation to each patient’s discharge

There are several options for the timing of the survey in relation to patient discharge:

  • Option 1 – immediately before discharge: The advantages of this option are that you can use consumer liaison workers or volunteers to administer the survey in person using a tablet device (Computer Assisted Personal Interviewing, or CAPI), which may increase response rates. This option must be carried out while the person is waiting to leave, to prevent any fear of the responses affecting their care. The disadvantages are that the discharge experience will not be reflected in responses, it is labour intensive, and there may be a risk of interviewer effects and social desirability bias in the results.
  • Option 2 – 24–48 hours after discharge: The advantages of this option are that the patient’s experience is fresh, which may increase response rates; recall bias may be reduced and there is less risk of the patient confusing this experience with a different one. The disadvantages are that the person may still be too ill to feel like responding and that, for most types of health services, early surveys tend to attract more negative responses than more delayed surveys.
  • Option 3 – within a week, fortnight or month of discharge: The advantages of this option are that the patient will likely be less affected by physical pain or discomfort and may reflect more realistically on their experience; reflections on discharge and follow-up can also be captured. The disadvantage is that the patient is more likely to confuse this experience with others, especially if asked more than a fortnight after discharge, which may increase recall bias.

Reasons for adapting the PREM

You may wish to adapt the PREM to better suit your own circumstances. Reasons for adapting the PREM may include:

  • Using the PREM in different types of services to those in which it has been tested
  • Using the PREM in different types of patients to those with whom the PREM was tested
  • Ensuring the concepts addressed in the questions are culturally appropriate for diverse populations
  • Keeping some questions from an existing survey to preserve time series data or to reflect organisational priorities
  • Adding supplementary questions around the PREM to target issues of local relevance or to get more detail (for example, qualitative content) or context.
     

Reliability and validity of the PREM

A PREM cannot be considered valid or reliable unless you use the questions in one of the ways they were originally tested. If you want to establish reliability and validity for other modes of administration, for ‘nesting’ the questions within other surveys, for adapting, reordering or interspersing questions, or for use with other patient populations or service settings, you will need to do your own field testing and statistical analysis.

Bear in mind that if you adapt a PREM in any way you may need to adhere to license requirements, which may include attributing the original questions to the PREM developer and noting how you have adapted them.

Types of adaptation

Examples of ways you might consider adapting a PREM include:

  • Including the PREM as a module in a bigger survey: You may wish to ‘nest’ the PREM as a module within a larger set of patient experience questions. If you are doing this, it is preferable to keep the PREM questions together in the same order, retain the rating scales for each question, and use all of the questions. Some implementers add net promoter scores, an opportunity for free text comment, and questions which are particularly relevant for their organisational priorities or which continue a time series from an old survey.
  • Using the PREM plus local options: Within a group of hospitals or services, the same core set of questions from the selected PREM can be asked across the whole group, with freedom for each hospital or service to add questions that suit their particular local circumstances, quality improvement initiatives or strategic objectives.
  • Adapting wording or content: Adapting the PREM wording or response options is generally not recommended unless you are going to field test and statistically analyse the new wording for a different patient population.


Any of the above may be temporary adjustments to meet particular organisational priorities or support quality improvement initiatives.

Free text questions

Incorporating an opportunity for the patient to give a free text comment may be valuable. This is usually in the form of a general question about what made the respondent rate their overall experience in the way they did. Advantages of adding such an opportunity are that:

  • The reasons behind a person’s multiple choice PREM responses become easier to interpret and act on
  • Common themes from free text responses can help early detection of emerging patterns of good or poor practice, can be used in training, and can be used to feed compliments back to particular staff
  • Any remedial action required by the health service can be done on a case-by-case basis in a timely way (if the person discloses an incident, near miss or other concern about safety and quality)
  • Free text responses are valuable ‘safety valves’ for patients who feel that the PREM questions did not ‘get at’ their main issues or feedback.

Formats – options and influences on choice of option

There are three main formats in which a PREM may be tested and administered: online, pen and paper, and computer-assisted telephone interview (CATI). In deciding which of these to use, you will need to consider:

  • Patient demographics (for example, are your patients mainly older people who may find pen and paper more usable than online?)
  • Resource capacity (for example, do you have the IT resources to conduct online surveys, and do you have staff available for data entry of pen and paper responses?).

You may find that you will need to implement a combination of formats, particularly if you have a wide range of patients – some may prefer online, some may prefer pen and paper. Also, even if you implement an online or CATI survey, you may need to provide other options as a back-up.

You also need to consider whether you will need external resources to conduct (or help conduct) the survey. External companies can also be contracted to conduct the entire process from survey to reporting, or can just provide one part of the process. 

Meeting patient needs

You will need to decide how to capture the experiences of patients with special needs. Consider (and if necessary test) appropriate modes and/or formats for the survey for:

  • Patients with sensory or cognitive impairment (for example, CATI, proxy respondent)
  • Patients who cannot read (for example, pictorial or audio version, CATI)
  • Non-English speakers
  • Culturally diverse populations who may understand health, illness and health services in a different way.
     

Communicating with patients

If you are to receive adequate numbers of responses to the PREM, you need to ensure that patients are engaged with the process. Communication is the key.

Start early during the patient journey. For example, the patient can be first told about the survey during an episode of care, so that approaches once they have completed their treatment do not come as a surprise. 

Make sure the materials you provide focus on the survey as a way to improve the quality of care. Ensure they are clear, attractive and written in plain English. You will need to consider:

  • In-building promotion such as posters
  • Cover letter or email
  • A patient-facing title for the survey (making it clear what the survey is for)
  • Brief introductory text within the survey
  • Instructions for survey completion.

Initial approach to patients

It is good practice to give patients advance notice that they will later be given a survey about their experience. There are several options for this initial approach to patients, including:

  • Giving written information on admission to enable informed consent to participate
  • Advising as part of discharge paperwork that a survey will be coming
  • Personal visit from a consumer liaison worker or volunteer to explain the survey
  • Letter or email after discharge to forewarn of the survey’s arrival.

As part of this advance notice, and in order for consent to be informed, it is important that it is made clear to patients that:

  • They can ask any questions they have before completing the survey
  • Whether their responses are anonymous and, if not, who can access their information
  • It is voluntary for them to participate in the survey
  • Their responses will in no way influence the care or treatment they will receive in future
  • They may be contacted to follow up on their responses, and in what circumstances that may happen
  • The information they provide will contribute to improving quality, safety and other patients’ experiences
  • Their information will be kept confidential, stored securely, and aggregated and de-identified for analysis.
     

Consent to participate

Patients should be given the opportunity to provide consent to participate in a PREM program. This consent can be either explicit or implied:

  • To give explicit consent, the patient must be given materials that explain the PREM and what is involved for the patient, or discuss the PREM with a staff member with a checklist of information, and sign a document expressing their understanding and willingness to participate
  • To give implicit consent, the patient must still be provided with materials or talk about the PREM with a staff member; their response to the survey is then taken as ‘implied’ consent that they have agreed to participate.
     

Follow-up procedures

You will need to consider how you will handle follow-up, especially where patients are interested in further dialogue with the hospital. Consider giving an opportunity within the survey for anyone who discloses harmful or unsafe practices to ask to be contacted (or to contact someone at the service). Also consider whether and how you will create a feedback loop to let the patient know how their feedback has resulted in change or to thank them for a compliment. This is easier when there is a free text question.

Sampling logistics

Existing patient administration systems can help you automate identification of your eligible sample and reduce the amount of demographic information you need to ask for in the survey itself. If information is linked to survey responses in this way, this needs to be explicitly described in the information materials that patients are given before they agree to participate in the survey.

De-identification can still be achieved, or identification restricted to a small number of people in particular circumstances, by substituting a patient identifier for name and address in the patient experience data collection.
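One way to substitute a patient identifier for name and address, as described above, is to hold the re-identification key in a separate, access-restricted lookup table. A minimal Python sketch with illustrative field names:

```python
import secrets

# Re-identification key, to be stored separately from survey data and
# accessible only to a small number of authorised people.
lookup = {}  # study_id -> identifying details (restricted access)

def deidentify(record):
    """Replace name and address in a survey record with a random study ID.

    Field names ('name', 'address') are illustrative only.
    """
    study_id = secrets.token_hex(8)
    lookup[study_id] = {"name": record.pop("name"),
                        "address": record.pop("address")}
    record["study_id"] = study_id
    return record

record = deidentify({"name": "Jane Doe", "address": "1 Example St", "score": 4})
print("name" in record)  # → False
```

The survey dataset then carries only the study identifier, so analysts never see identifying details, while authorised staff can re-identify a record (for example, to follow up a disclosed safety incident) via the restricted table.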

Surveying logistics

Consider current patient experience surveying methods and whether the new question set can be ‘slotted in’ to existing software or processes without any other changes. Depending on the mode of administration, bespoke surveying software, phone interview resources, or printing and mailing services will need to be arranged in-house or contracted.

Consider who will be responsible for each stage of the surveying process. For example:

  • Who or what will trigger the sending out of a survey?
  • Where will the responses ‘land’ when they are returned (which email account or database or physical location)?
  • Who will clean/analyse/report the returned data?
  • To what extent will the sending out, analysis and reporting of responses be done in a central or distributed way? This is especially relevant if your organisation is responsible for a number of services or a geographic area.

You may also need to consider strategies to improve response rates (for example, multiple formats, reminders, patient self-registration portal).

Reporting logistics

This links strongly to your objectives for using the selected PREM. To achieve these objectives, you will need to consider how you will produce reports, how often, what form they will be in, and who will be the audience.

This also links to the mechanisms that you will use to achieve your objectives. Consider what automated processes will need to be in place to ensure results are regularly integrated into other processes and initiatives in the organisation (for example, integration with complaints management, incident management, quality improvement systems and accreditation).

Stage 3: Using the data

PREMs are recommended as a resource to:

  • prioritise and inform local safety and quality improvement
  • stimulate meaningful discussion with consumers
  • help organisations to keep track of their move towards patient-centred care. 

This information is relevant for:

  • deciding whether and how to set automated triggers for action based on PREM responses
  • deciding on a data analysis strategy
  • thinking through how to present results, to whom, and how often, and
  • thinking through how to translate PREM results into improvements in the safety and quality of care.

Parameters

Consider the parameters that you need to monitor in your organisation to determine whether you are on track to achieve your organisation’s PREM objectives. Parameters may include:

  • Pre-administration attrition rate
    Attrition can lead to sampling bias. The attrition rate is the proportion of eligible patients who cannot be administered the survey because
    • they do not consent to being sent a survey, or
    • the organisation does not have the information it needs to administer the survey (for example, a mobile phone number if the survey is administered by text message, or an email address if administered by email).
  • Response rate and completion rate
    This is the proportion of eligible patients receiving the survey who respond partially or completely and return their responses to the surveying organisation. Partial responses can bias results; they may be received when
    • the survey is administered using pen and paper, or
    • the survey administration method does not make all questions compulsory.
  • Attrition and response rates for population segments
    You may want to make sure that certain groups within your patient population are adequately represented in the responses you receive or that the relative representation of different groups is reflective of your overall patient population. If so, you could monitor attrition rates and response rates broken down by
    • cultural and linguistic background
    • Aboriginal and/or Torres Strait Islander status
    • age and gender
    • department or specialty of admission
    • reason for attending
  • Number of respondents
    This is the absolute number of people who send a completed response within a defined time frame, as well as numbers of respondents within each population group you would like to present disaggregated data about. Low numbers of responses may affect the analysis process and the ability to report PREM results.
  • Average performance on individual questions
    Consider whether you would like to set and communicate an expectation about how your organisation (or different parts of it) performs on each question. You could do this by
    • setting a goal for the average of all responses returned within a defined time frame, across all questions (this will require setting up a scoring system beforehand – see Step 3.2), or
    • setting a goal for the proportion of respondents selecting a particular response or responses (for example, X% select ‘always’; X+15% select ‘always’ or ‘mostly’).
  • Average performance on overall (final) question
    Consider whether you would like to set and communicate an expectation about how your organisation (or different parts of it) performs on the overall (final) question. You could do this by setting a goal for the proportion of respondents selecting a particular response(s) (for example, X% select ‘very good’; X+10% select ‘very good’ or ‘good’).
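The rate parameters above can be made concrete with a minimal calculation. All counts and function names here are invented for illustration.

```python
def attrition_rate(eligible, not_consented, missing_contact):
    """Proportion of eligible patients who could not be sent the survey."""
    return (not_consented + missing_contact) / eligible

def response_rate(surveys_sent, responses_returned):
    """Proportion of surveys sent that came back (partial or complete)."""
    return responses_returned / surveys_sent

def completion_rate(responses_returned, fully_completed):
    """Proportion of returned responses with every question answered."""
    return fully_completed / responses_returned

# Illustrative counts only: 1000 eligible patients, of whom 150 did not
# consent and 50 had no contact details on record.
sent = 1000 - 150 - 50
print(attrition_rate(1000, 150, 50))  # 0.2
print(response_rate(sent, 320))       # 0.4
print(completion_rate(320, 288))      # 0.9
```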
     

Baseline

If you plan to conduct a staged implementation involving a pilot study, you can use the pilot to determine the baseline measure for each of the above parameters. This helps you to decide what is achievable in your organisation in the short, medium and long term.

Note that after analysis of an initial pilot, there is an opportunity to make changes to your implementation to improve scores on each parameter. If, after a trial of the new method, scores have improved and you decide to take the new method into your full rollout, the baseline scores will now be the results from the trial of the new method. 

Active consideration of PREM results and response ‘red flags’

Later in this stage, you will look at ways in which PREM results can be integrated into workflows to ensure they are actively considered by relevant staff members. You may wish to monitor how often this happens when there are certain types of responses or trends in responses.

Consider whether you will establish red flag triggers for any instance of a particular response. For example, if using the Australian Hospital Patient Experience Question Set (AHPEQS), you may set up an alert for any positive responses to the question ‘I experienced unexpected harm or distress’, or an alert only when this is accompanied by a subsequent response that staff did not discuss it with the patient.
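A minimal sketch of such a rule, assuming hypothetical question keys and yes/no response wording rather than the actual survey items:

```python
def red_flag(response):
    """Flag a response for review when the patient reports unexpected
    harm or distress that staff did not discuss with them.

    The keys 'unexpected_harm' and 'harm_discussed_with_me' are
    illustrative assumptions, not real question identifiers.
    """
    harmed = response.get("unexpected_harm") == "yes"
    discussed = response.get("harm_discussed_with_me") == "yes"
    return harmed and not discussed

print(red_flag({"unexpected_harm": "yes", "harm_discussed_with_me": "no"}))   # True
print(red_flag({"unexpected_harm": "yes", "harm_discussed_with_me": "yes"}))  # False
```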

Triggers for action

You can set absolute values and/or relative changes for each parameter to define when corrective action will need to be taken:

Triggers based on parameter value

Consider the minimum acceptable value for each parameter (that is, the point at which you think corrective action will need to be taken). For example:

  • when absolute numbers of responses for a given population group pose an unacceptable risk of identifying individuals if reported, or
  • when attrition or response rates fall below X% of eligible discharges.

Triggers based on changes in parameter value

Consider the minimum acceptable change in value for each parameter over a defined time frame, below which corrective action will need to be taken. This type of trigger will need to take into account ceiling effects (so as not to create a trigger when the values are above a certain level to start with). For example:

  • when improvement for a particular population group falls below X%
  • when improvement on a specific question that was previously identified as a concern falls below X%, or
  • when improvement on the value for the overall (final) question falls below X% in a defined time frame.
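The two trigger types can be sketched as simple predicate functions; the threshold values used here are illustrative assumptions only.

```python
def absolute_trigger(value, minimum):
    """Fire when a parameter falls below its minimum acceptable value."""
    return value < minimum

def improvement_trigger(previous, current, min_improvement, ceiling):
    """Fire when improvement over the period is too small, unless the
    previous value was already above the ceiling (ceiling-effect guard)."""
    if previous >= ceiling:
        return False
    return (current - previous) < min_improvement

# Illustrative thresholds only
print(absolute_trigger(0.18, 0.25))                 # True: rate below 25% minimum
print(improvement_trigger(0.60, 0.62, 0.05, 0.90))  # True: improved by only 2 points
print(improvement_trigger(0.92, 0.92, 0.05, 0.90))  # False: already near ceiling
```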

 

Automating the triggers

Automating triggers will make the process more useful and cost-effective for your organisation. Think about:

  • How you can build automated alerts into your surveying system when trigger thresholds are reached
  • How the trigger alert will be delivered and when
  • Who the alert will be delivered to and who is expected to take action
  • How the appropriate action is coupled with delivery of the alert
  • Whether there is a need for different ‘severities’ of alert – such as a traffic light system where ‘red’ indicates immediate action required and ‘amber’ indicates action required within a particular time frame.
     

Defining what action will be taken for each trigger

Consider how your organisation will respond if a trigger for action occurs. An example is given below, which could be worked through for each of the triggers you have identified.

If this trigger occurs … this action will follow (examples only).

Trigger: high pre-administration attrition rates

Identify the problem(s), for example:

  • Attrition rate variation related to demographic variables
  • Attrition rate related to lack of consent
  • Attrition rate related to absence of contact details.

Investigate the cause(s) of the problem:

  • By using process mapping
  • By speaking to patients from population groups with high attrition rates
  • By studying the proportion of SMS/email contact details available in the organisation’s patient administration system for different population groups.

Modify the survey process based on your identification and investigation of the problem. For example:

  • Consider offering more than one option for mode of administration
  • Consider how to increase the percentage of patients with SMS or email details recorded
  • Consider timing and burden of consent process.

Monitor the impact of modification in the next round of results.

 

Communicating expectations

Consider who you will consult and communicate with about establishing trigger thresholds, and how you will do this. For example:

  • Staff responsible for sending out the survey and gathering responses will need to be consulted about the trigger thresholds and associated actions for attrition and response rates
  • Staff responsible for analysing and reporting on completed responses will need to be consulted about trigger thresholds for absolute numbers of responses and proportion of completed responses by population segment
  • Frontline professionals will need to be consulted about the trigger thresholds for performance on particular questions and the survey as a whole
  • Senior managers will need to be consulted about appropriate actions to take when trigger thresholds are reached.
     

Reviewing parameters and triggers

Think about how often you will review the continued appropriateness of each trigger threshold and the types of parameters monitored, and how you will modify them if needed.

Processing and ‘cleaning’ raw data

Depending on your mode(s) of administration, different methods for processing the raw data received from patients will be required. The goal of this processing is to deal with any ambiguities in responses and get the data into a format that can then be read by analysis software.

For example, if automated scanning technology is not being used, paper surveys may require manual data entry into a database and a rule will need to be developed for interpreting unreadable, ambiguous or incomplete responses and reflecting these in the database. A rule would also need to be developed for dealing with anything written by the respondent that does not fit with the response options given (for example, a request to be contacted if that possibility has not been offered).

The cleaning process can also involve de-identification of data – such as replacing a person’s name with a unique identifier or separating their identifying details from their survey responses.

Descriptive statistics

Once the data have been ‘cleaned’, descriptive statistics can be used to summarise and describe the basic characteristics of aggregated survey response data for a cohort of patients. This is the simplest analysis strategy and can be represented in tables, charts and line graphs. Examples of descriptive statistics for a set of PREM responses might be:

  • Frequency of response option choice for each question
  • Trends over time in frequency of response option choice for each question.
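As a sketch, frequency counts of this kind can be produced with a few lines of Python; the question keys and response values below are invented for illustration.

```python
from collections import Counter

# One dict per returned survey (illustrative data only)
responses = [
    {"q1": "always", "q2": "mostly"},
    {"q1": "always", "q2": "sometimes"},
    {"q1": "mostly", "q2": "mostly"},
]

def frequencies(responses, question):
    """Count how often each response option was chosen for a question.
    Surveys that skipped the question are simply not counted."""
    return Counter(r[question] for r in responses if question in r)

print(frequencies(responses, "q1"))  # Counter({'always': 2, 'mostly': 1})
```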
     

Scoring methods

Along with simple frequency analysis, you may wish to ‘score’ responses. Scoring (coding) is a way to convert qualitative responses (such as multiple-choice survey responses) into numerical data. Applying a score means placing a numerical value on response options in a way that reflects the desirability of that response.

You do not have to apply a scoring method, but, if you do, it must be applied consistently across all responses received from patients, whatever the mode of administration. No matter what the type of response options (for example, yes/no, ‘always’ to ‘never’), the most desirable response for each question needs to be assigned the highest or lowest score in a consistent way.

Partial credit scoring

Many PREMs have most of the response options offered on a frequency scale (‘always’ to ‘never’). For most of these questions, ‘always’ is the most desirable response. Using a ‘partial credit’ system, the highest score would be applied to that response, and progressively lower scores to ‘mostly’, ‘sometimes’ etc. 

Some advantages of applying partial credit scoring are that:

  • Credit is given proportionately for each option, so that services which get all ‘mostly’ responses will ‘perform’ better in their overall scoring than those services which get all ‘rarely’ responses
  • When the scores on each question are aggregated across a group of patients, an average score can be calculated
  • If you are required to report a composite score for performance on the whole PREM, scoring can help with this process.

Some disadvantages of applying scoring and aggregation of scores are that:

  • Assigning numbers to qualitative categories (such as ‘always’) can lead to misleading representation of data (for example, giving a score of 4 to ‘always’ may imply that this is twice as valuable a situation to the patient as ‘sometimes’ which gets a score of 2)
  • A decision needs to be made about how to treat different types of response option (for example, yes/no vs frequency scale)
  • A decision needs to be made about how to treat missing or ambiguous responses.
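A minimal sketch of partial credit scoring, assuming an illustrative five-point mapping; the actual values an organisation assigns may differ.

```python
# Assumed partial-credit mapping for a frequency scale (illustrative only)
PARTIAL_CREDIT = {"always": 4, "mostly": 3, "sometimes": 2, "rarely": 1, "never": 0}

def average_score(answers, scale=PARTIAL_CREDIT):
    """Aggregate one question's responses into an average partial-credit
    score. Missing or unrecognised answers are excluded from the average
    rather than scored as zero."""
    scored = [scale[a] for a in answers if a in scale]
    return sum(scored) / len(scored) if scored else None

print(average_score(["always", "mostly", "mostly"]))  # 3.33...
print(average_score(["rarely", "rarely", "rarely"]))  # 1.0
```

Excluding missing or unrecognised answers, rather than scoring them as zero, is one way of making the decision about missing or ambiguous responses noted above; whatever rule is chosen must be applied consistently.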
     

‘Top box’ and ‘bottom box’ scoring

Where your healthcare service scores highly on most PREM questions, it can be useful to only count ‘top box’ responses to discriminate between excellent and good experiences. This can motivate improvements towards consistent excellence. Results can then be presented in terms of the proportion of all responses for that question which received a top box response (for example, ‘always’). Other responses are not broken down in reporting. Anecdotally, this approach can be more effective in catalysing quality improvement and behaviour change within an organisation than the partial credit system.

If you are using a ‘top box’ system, it can be useful to add analysis of ‘bottom box’ (that is, the least desirable) responses as well. This can help highlight problematic patterns in quality and safety that would require corrective action.
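Top box and bottom box proportions can be sketched with the same kind of helper; the response wording here is illustrative.

```python
def box_proportion(answers, box):
    """Proportion of non-missing responses matching the given box
    (for example, 'always' for top box, 'never' for bottom box)."""
    answered = [a for a in answers if a is not None]
    if not answered:
        return None
    return sum(a == box for a in answered) / len(answered)

# Illustrative responses to one question
answers = ["always", "always", "mostly", "never"]
print(box_proportion(answers, "always"))  # top box: 0.5
print(box_proportion(answers, "never"))   # bottom box: 0.25
```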

Analysis infrastructure

Think about how you will automate the analysis. This will involve some kind of database to store the raw data, with software to enable simple manual data entry and/or to enable automatic feeding in of returned electronic survey responses. 

Security of stored data

Think about:

  • How you will store data
  • How long you will store data
  • Who will have access to the data, and your arrangements for preventing access by other people
  • Whether all stored data will be de-identified
  • How you will communicate your security arrangements to consumers, to reassure them about the survey processes.

Presentation of data

Factors affecting how you present PREM results include:

  • Who your audience(s) will be and what each of those audiences think is relevant and important
  • Whether you are mostly looking to use the data as a resource for continuous quality and safety improvement or to report performance (it may be both)
  • What level of granularity you want to display (that is, do you want to compare, for example, men vs women, ward 1 vs ward 2, orthopaedics vs general surgery)

Research has shown that there are more and less effective ways to present this type of data to different audiences.

Putting the results in the context of other patient-reported information

Using PREMs is only the starting point for understanding what is working and what is not working for your patients. PREM results must be presented in the context of other information your organisation collects about patients’ perspectives on the safety and quality of their treatment and care. Non-survey information collected from patients may include:

  • Social media mentions
  • Reviews on patientopinion and other health care review websites
  • Manager or executive impromptu conversations with patients
  • Complaints and compliments
  • Consumer presentations to staff meetings and in staff training
  • Focus groups that investigate safety and quality issues in greater depth
  • Staff records of concerns raised by patients and carers
  • Patient-reported incident measures
  • Patient-reported outcome measures.

Using supplementary sources of information and presenting them alongside the PREM results will increase your ability to:

  • Identify reasons behind PREM results for an individual patient or across a patient cohort
  • Confirm or disconfirm anomalies in the PREM data.
     

Putting the results in the context of other safety and quality information

PREM results should also be put into the context of other safety and quality information from your organisation.

The Measurement and Monitoring of Safety Framework is an example of how an organisation can get a holistic picture of the safety and quality of its health services.

Methods of reporting 

Some example methods of presenting data, and the purposes and audiences for which these methods might be appropriate, are given below.

Some example methods of presenting data

Each entry below gives the method of reporting, followed by the purpose of reporting, the main audience and the frequency of reporting.

  • Live interactive dashboard
    • Integration of patient perspectives into day-to-day decision-making and improvement; clinicians and managers; continuous
    • Early identification of emerging safety/quality issues; clinicians and managers; continuous
    • Identification of timely corrective action; clinicians and managers; continuous
  • Static retrospective reports
    • Report organisational performance (actual and trending) to the Board; executive and Board; periodic, according to performance reporting cycle
    • Evidence for accreditation; accrediting agencies; periodic, according to performance reporting cycle
    • Meeting contractual obligations; executive and funding agencies; periodic, according to contract requirements
  • Interactive retrospective reports on the organisation’s website
    • Accountability and transparency to consumers and the public; consumers, general public and media; periodic
  • Issue- or population-specific reports
    • Monitor comparative experiences between population, condition or service groups; clinicians and managers; periodic, according to organisational quality improvement strategies
    • Monitor organisational quality and safety priorities; clinicians and managers; periodic, according to organisational quality improvement strategies

Moving from survey data to practical improvement

The claim that doing patient experience surveys and collecting patient feedback will lead to improvements in services relies on several assumptions. One study identified three assumptions:

  • Assumption 1 – There are valid ways of measuring the healthcare experiences of patients for use in feedback
  • Assumption 2 – Feedback of information about patients’ experiences to service providers (directly or indirectly via public reporting) stimulates improvement efforts within individuals, teams and organisations
  • Assumption 3 – Improvement efforts initiated by organisations, teams or individuals lead to improvement in future patients’ experiences of care.

This step is concerned with assumptions 2 and 3. What makes it more likely that the PREM can be used to achieve meaningful change in services? What are the conditions and actions that need to be put in place to get from a patient filling out the survey to improvements in quality and safety that are noticeable to a patient? Researchers have identified some of the barriers and enablers of the meaningful use of patient experience (and safety and quality) data.

Consumer factors influencing collection of meaningful data

A study points out that patients are unlikely to provide meaningful patient experience data if they do not see the point of providing it, do not feel empowered to provide it, or find the survey difficult to fill out. The researchers developed a model of barriers and facilitators for consumers providing feedback on the safety of the service they used.

[Figure not reproduced: model of barriers and facilitators. Source: De Brún et al. (2017)]

Factors affecting staff use of data for improvement

Research in the United Kingdom found healthcare staff often find it difficult to make use of patient survey data. The research highlighted the complexity of the process needed to get from the point where staff receive patient feedback to the point where they respond to it by making changes to improve safety and quality. Required conditions are that:

  • Staff must exhibit the belief that ‘listening to patients is a worthwhile exercise’
  • Local teams need ‘adequate autonomy, ownership and resources to enact change’
  • Where high level or inter-departmental support is required, there must be ‘organisational readiness to change’.
[Figure not reproduced. OR = organisational readiness; SL = structural legitimacy]

Organisational and cultural factors influencing meaningful use of data

A literature review examined how large-scale patient experience survey results are used at a local level. It showed that translating these results into effective action and improvement depends on the following organisational and cultural factors:

  • Sufficient resources in terms of knowledge, time and personnel to produce and present good quality data
  • Positive attitude of staff towards patient experience information
  • Effective and tailored presentation of results to staff – not just written feedback
  • Common engagement and understanding from all professions in an organisation
  • High-quality data collection and analysis methods, easily understood results, system to follow up results.

Davies and Cleary also identified the factors affecting the use of patient survey data in quality improvement. Although dated, this study presents a useful summary of barriers and enablers to the translation of survey data into practical change. These included:

  • Organisational barriers
    • competing priorities
    • lack of supporting values for patient-centred care
    • lack of quality improvement infrastructure
  • Organisational promoters
    • developing a culture of patient centredness
    • developing quality improvement structures and skills
    • persistence of quality improvement staff over many years
  • Professional barriers
    • clinical scepticism
    • defensiveness and resistance to change
    • lack of staff selection, training and support
  • Professional promoters
    • clinical leadership
    • selection of staff for their ‘people skills’
    • structured feedback of results to teams or individuals
  • Data-related barriers
    • felt lack of expertise with survey methods
    • lack of timely feedback of results
    • lack of specificity and discrimination
    • uncertainty over effective interventions or rate of changes
    • lack of cost-effectiveness of data collection.
       

Integrated workflows 

When setting up an electronic interface to present PREM results within the organisation (whether this is retrospective or real-time information), consider building in workflows to make it easier to act on the results. For example, when one of the trigger thresholds developed earlier in this stage is reached, particular actions could be prompted and monitored for completion. Another way to integrate quality improvement into the workflow is to provide suggestions for action in different parts of the organisation, even where trigger thresholds are not reached – and even if the results are mostly very good overall.

Analysis and reporting of results can be part of an automated electronic workflow consisting of:

  1. Identification of consenting patients in patient administration systems
  2. Extraction of eligible patient demographics from patient administration systems
  3. Distribution of the survey to eligible patients
  4. Entry of returned responses into database
  5. Quality checking and cleaning of data
  6. Scoring of data
  7. Descriptive statistics generation
  8. Application of any relevant statistical tests
  9. Presentation and reporting of results to different audiences (including live dashboards).
     

Feedback to consumers

It is good (though rare) practice to demonstrate to consumers who have taken the time to fill out a survey that their time has been well spent. Some options for doing this include:

  • Presenting ‘real life’ evidence of previous or ongoing improvements stimulated or informed by patient experience survey results (either when sending the survey or after receiving a completed response)
  • Asking consumers if they would be willing to share their experiences with frontline staff in a staff meeting or training
  • Using consumer focus groups to further investigate a pattern in PREM data – to get to the reasons behind a problem and to help with designing solutions.

Engaging consumers as partners with frontline staff in reviewing PREM results in workshop environments to design improvements is a way to take the feedback and learning loop to a more meaningful level. Resources for experience-based co-design are available from the Point of Care Foundation.

Communities of practice

Setting up a regular forum for interested consumers, staff, managers and the executive to review PREM results together is a way to ensure that all stakeholders collaborate to achieve patient-focused safety and quality improvement. This could be a virtual or face-to-face community of practice inviting consumer and frontline staff input to identify, design, record and evaluate changes to processes, structures and practices. Data-driven collaborative learning models developed for other types of healthcare data can provide a useful template.

Last updated: 13 March 2026