References

Bullivant JRN. Benchmarking in the UK National Health Service. Int J Health Care Qual Assur. 1996; 9(2): 9-14.
Carey RG, Seibert JH. A patient survey system to measure quality improvement: questionnaire validity and reliability. Med Care. 1993; 31(9): 834-845.
Busby M, Burke FJT, Matthews R, Cyrta J, Mullins A. The development of a concise questionnaire designed to measure perceived outcomes on issues of greatest importance to patients. Br Dent J. 2012; 212(8): 382-383.
Busby M, Burke FJT, Matthews R, Cyrta J, Mullins A. Measuring oral health self perceptions as part of a concise patient survey. Br Dent J. 2012; 213(12): 611-615.
Newsome P. British Dental Association; 2001.

Can a concise, benchmarked patient survey help to stimulate improvements in perceived quality of care?

Volume 41, Issue 9, November 2014 | Pages 816-822

Authors

Mike Busby

MPhil, BDS, LDS RCS, DGDP, FDS RCS(Edin)

Dental Advisor Denplan, Honorary Lecturer in Primary Dental Care, University of Birmingham, St Chad's Queensway, Birmingham B4 6NN, UK


FJ Trevor Burke

DDS, MSc, MDS, MGDS, FDS (RCS Edin), FDS RCS (Eng), FCG Dent, FADM


Roger Matthews

MA, BDS, FDS RCS(Edin), FFGDP(UK)

Chief Dental Officer, Denplan, Denplan Court, Victoria Road, Winchester SO23 7RG, UK


Jakub Cyrta

Data and Management Information Co-ordinator, Denplan, Denplan Court, Victoria Road, Winchester SO23 7RG, UK


Anne Mullins

BA

Research Manager, Denplan, Denplan Court, Victoria Road, Winchester SO23 7RG, UK


Abstract

Second phase patient survey audit results were reviewed for 41 practices. Five of the six practices with the lowest first phase results achieved significantly improved results in the second phase. The average increase in the patient perception index between phase 1 and phase 2 for these six practices was 6 points; for the other 35 practices the average increase was only 1 point. The five practices recording significantly improved patient perceptions all reported that significant practice development had taken place between the phase 1 and phase 2 audits. For three of these five practices, the first phase audit result had been a significant inspiration for that development. It appears that this patient survey instrument may inspire significant development when practices score notably below the benchmark.

Clinical Relevance: A concise, benchmarked patient survey may stimulate practice development, and improved patient perceptions of care, in practices scoring notably below the benchmark.

Article

Benchmarking against a standard is a central tenet of clinical audit1 and patient surveys have become established as a method of measuring patient perceived quality of care.2 Whereas a clinical procedure might have a clear professional standard described in the literature against which benchmarking can take place, benchmarking of patient survey data is not so straightforward.

Busby et al3,4 described a concise patient survey in which results for each practice participating in a voluntary accreditation programme (Denplan Excel) were benchmarked against the average results for other participating practices. This instrument was called the Denplan Excel Patient Survey (DEPS); its benchmark is called the National Reference Sample (NRS). DEPS was designed to cover those aspects of care found, in a literature review, to be most important to patients, and therefore central to perceived quality. The 10 core questions are reproduced with their patient response options in Figure 1. The authors concluded that, because DEPS reporting indicates when perceived performance falls significantly below the NRS, it could inform practice development, which in turn might lead to improved quality and improved patient perceptions.

Figure 1. The 10 core questions of the Denplan Excel Patient Survey.

Between 2010 and 2013, over 500 practices (representing nearly 1000 dentists) participated in DEPS as part of their membership of the voluntary practice accreditation scheme Denplan Excel.5 Practices are required to run a DEPS every three years as part of this Standard. Responses were received from 82,226 patients, an average of more than 150 responses per participating practice. The report which each practice receives after its DEPS has been conducted often runs to more than 20 pages, depending on how many verbatim comments are received, and presents the results with various demographic sub-analyses. The key results are presented in the ‘Ideal Scores – All Patients' table, an example of which is reproduced in Figure 2; the results for the individual practice are plotted against the NRS for each question. Also prominent in the report is the Patient Perception Index (PPI): the percentage of ‘ideal’ responses received across the 10 core questions, benchmarked against the NRS PPI. The average NRS PPI for this four-year period was 74%. The range of PPIs measured across all participating practices was 46–90%.

Figure 2. Results statistically significantly (to 90% certainty) higher or lower than the NRS are highlighted in the practice report.

As part of Denplan Excel Accreditation, each practice is visited by a dentist trained as a Practice Adviser for the Standard approximately every 18 months. Part of the protocol for these visits includes a review to check whether practices have acted upon feedback received in their latest DEPS.

Of the practices participating during these first four years, 70% had a PPI within 5% of the NRS, 16% had a PPI above 80% and 14% had a PPI below 70%. This paper investigates whether practices scoring below 70% in 2010 improved their scores three years later, to what they attributed these improved patient perceptions, and the degree to which they believe that their 2010 DEPS results inspired their development.

Method

During 2013, practices began to have their second DEPS, using exactly the same 10 core questions and the same patient grading options as in 2010. Up to the end of October 2013, six practices were identified that had attained a PPI of less than 70% in their first DEPS in 2010. (A further 35 practices had conducted their second DEPS during this period.)

Each practice's PPI in 2010 was compared with its PPI in 2013. If the PPI in 2013 was higher than in 2010, the practice received a telephone interview with the lead author. The protocol for this interview was as follows:

  • The Principal dentist named on the survey document was asked to represent the practice in this matter, and a convenient time for the interview was arranged.
  • The Principal dentist was told that the interview was to be about their DEPS results. They were congratulated on the improvement in their PPI.
  • The improvement in PPI was confirmed with them. Statistically significant improvements on any of the 10 core question results were also confirmed.
  • The Principal dentist was then asked:

    ‘Why do you believe that your result improved significantly in comparison with the 2010 result?’

    Their answer was summarized in writing by the lead author in no more than 50 words. This summary was then read back to the Principal dentist for confirmation of its accuracy.

    The Principal dentist was then asked to respond to the following question using the options provided:

    ‘To what extent do you believe that your survey result in 2010 inspired your practice development?’

    The options were:

  • Very significant;
  • Significant;
  • Not very significant;
  • Insignificant.

Results

Table 1 shows the PPI in 2010 and in 2013 for the six practices scoring under 70 in 2010.


Practice Number | PPI 2010 | PPI 2013
1 | 56 | 65
2 | 68 | 65
3 | 69 | 72
4 | 69 | 73
5 | 69 | 76
6 | 67 | 82
Averages | 66 | 72

Table 2 shows the average PPI in 2010 for the remaining 35 practices and their average PPI in 2013.


2010 Average PPI | 2013 Average PPI
77 | 78

Table 3 shows, for each practice scoring under 70 in 2010, the aspects on which their score improved statistically significantly to 90% certainty. Practice 2 is excluded as its PPI was lower in 2013 than in 2010.


Aspect (Question No) | Practice Number | 2010 Score | 2013 Score
Pain (1) | 6 | 56 | 73
Function (2) | 6 | 47 | 61
Appearance (3) | 6 | 61 | 24
Attitude (6) | 6 | 86 | 100
Understanding (7) | 6 | 86 | 97
Explaining (8) | 6 | 81 | 100
Value for Money (9) | 6 | 37 | 64
Trust (10) | 6 | 79 | 98
Competence (4) | 5 | 81 | 95
Attitude (6) | 5 | 86 | 94
Understanding (7) | 5 | 80 | 89
Explaining (8) | 5 | 78 | 90
Value for Money (9) | 5 | 45 | 57
Trust (10) | 5 | 77 | 88
Patient function (3) | 4 | 53 | 60
Value for Money (9) | 4 | 32 | 38
Understanding (7) | 3 | 84 | 89
Explaining (8) | 3 | 84 | 90
Value for Money (9) | 3 | 39 | 49
Trust (10) | 3 | 81 | 87
Pain (1) | 1 | 41 | 50
Function (2) | 1 | 46 | 52
Appearance (3) | 1 | 22 | 30
Competence (4) | 1 | 65 | 76
Attitude (6) | 1 | 75 | 82
Understanding (7) | 1 | 65 | 75
Explaining (8) | 1 | 65 | 73
Value for Money (9) | 1 | 47 | 56
Trust (10) | 1 | 62 | 74

Table 4 reproduces the summaries of the responses to the question:

    ‘Why do you believe that your result improved significantly in comparison with the 2010 result?’


Practice Number | Summary of Reasons for Improvement
6 | ‘I was fairly new to the patients in 2010 and it took a long while and a lot of hard work to get them to fully accept me’.
5 | ‘We had an intensive one year contract with a practice consultant/team trainer. We monitored our progress with secret shopper calls too. We have also recruited extra team members'.
4 | ‘We are explaining things better, especially fees and we give out printed estimates more often now’.
3 | ‘We have made improvements, some structural, which I think the patients are noticing. I have become more relaxed when dealing with my patients’.
1 | ‘We have made some improvements to the decor. We have also made improvements to our team-working through Investors in People accreditation and Teamwork training with Denplan’.

Table 5 reports the answers to the question:

    ‘To what extent do you believe that your survey result in 2010 inspired your practice development?’


Practice Number | DEPS Inspiration
6 | Insignificant
5 | Significant
4 | Significant
3 | Significant
1 | Not very significant

Discussion

In 2001, Newsome6 stated:

‘Published studies of dental patient satisfaction nearly always reveal very high levels of satisfaction’. And further:

‘The modal response, i.e. the value that occurs most frequently, is typically the most positive response allowed by the questionnaire’.

It will be clear from the average PPI recorded in DEPS to date (74%) that this instrument provides no exception to Newsome's observation. Nevertheless, the range of PPIs measured to date runs from a lowest score of 46 to a highest of 90: wide variation in patient perceived performance on these 10 key aspects of care quality exists within the 500+ participating practices.

In selecting the six practices with the lowest scores from the first 41 practices to complete a second DEPS, the authors wanted to investigate whether perceived performance had improved in practices falling notably below the PPI benchmark. Table 1 confirms that this was the case for five of the six and that, on average, their PPIs increased from 66 in 2010 to 72 in 2013, a statistically significant result to 90% certainty. By contrast, the average improvement in the remaining 35 practices over the three year interval was only one percentage point (Table 2). Table 1 also shows that Practice 6 improved patient perceptions markedly, moving from a PPI of 67 in 2010 to an impressive 82 in 2013; only 16% of all participating practices score more than 80. Practice 1 likewise made big strides towards the benchmark of 75, reaching 65 in 2013 from a score of 56 in 2010.

Table 3 shows some significantly improved perceptions on the individual aspects of perceived quality measured by DEPS. Practices 1, 5 and 6, in particular, improved patient perceptions on many of these important aspects of care: each recorded a statistically significant improvement on six or more of the 10 key aspects measured by DEPS.

The idea that the benchmarked results might be inspiring development is supported by the feedback from Practices 3, 4 and 5 (Table 5). Practice 1, however, did not believe that the 2010 DEPS result had been particularly significant in inspiring its development. The owner of Practice 6 was unaware of either result until congratulated on the latest one as part of this protocol; they did not have much faith in patient surveys generally, and concluded that the 2010 result was insignificant in inspiring the effort they had made to improve their standing with patients, who were fairly new to them in 2010.

It will be clear from Table 4 that, in each case apart from Practice 6, relevant and clearly distinguishable practice development was reported to have taken place within the previous three years. For Practice 6, the owner reported the huge general effort made to ‘win patients over’. Also apparent from Table 4 is that the three practices with the most notable improvements in their scores (Practices 1, 5 and 6) reported the most significant efforts over that period. For Practices 1 and 5, this was a significant investment in team development; for Practice 6, the clear effort to win over unfamiliar patients. DEPS appears to be registering that patients have recognized these ‘significant efforts’.

Practices with a PPI close to, or above, the NRS were not contacted as part of this study. Nevertheless, many of these practices would have received scores on some individual aspects of DEPS statistically significantly below the NRS. Indeed, only 38% of participating practices to date have no individual aspect score falling statistically significantly below the NRS. The other 62% therefore have at least one aspect on which they might be stimulated to ask: ‘Why might this perception be occurring?’ The Denplan Excel Manual recommends that they consider this question at a team meeting. It will be interesting to see whether these practices, acting on this advice, take action, and whether patients recognize it in their second DEPS. In this respect, Tables 4 and 5 reveal how Practice 4 was ‘significantly’ inspired by DEPS to improve its communication about fees because of its result on ‘value for money’; Table 3 shows that it was rewarded with a statistically significant improvement in patient perceptions on this issue in its 2013 DEPS.

Summary and conclusions

This small study suggests that a concise, benchmarked patient survey may inspire significant practice development in some practices scoring notably below the average. What DEPS does seem to be doing, in every case here, is registering patients' positive perception of clear efforts made to improve their quality of care.

Over the next two years, a second DEPS will be conducted for approximately another 60 practices which fell significantly below the benchmark in their first DEPS. It will be interesting to see whether this positive trend continues.