Effective use of data literature review: Interpreting and analysing data

Educators often report a lack of confidence in the statistical and analytical interpretation of data and should be supported to develop relevant skills and knowledge.

Specific areas in which educators report a lack of knowledge or skills include IT skills, interpretative skills and content knowledge.

Identifying key measures

Educators should be familiar with a range of key measures. These can be classified as ‘outcome’, ‘process’ or ‘balance’ measures. Measures should be embedded into improvement plans and baselines established.

Qualitative evidence is often limited to surveys and questionnaires. Additional methods should be used where appropriate, such as pupil profiling, focus groups or content analysis.

Formative assessment or frequent but ‘lower stakes’ diagnostic assessment can have a more direct impact than less frequent, ‘higher stakes’ summative assessment.

Schools and settings sometimes report difficulty in tracking the progress of learners with complex needs. Standard metrics and measures don’t always ‘fit’, so consideration should be given to ensuring appropriate measures and tracking are in place for this group.

Observational or statistical interpretation of data

Local authorities can support schools and settings with summative analysis through the provision of data profiles or data packages. Consideration should be given to ease of interpretation, relevance and the timeframes in which material can have the greatest impact on improvement.

Local authorities can also support schools with tracking systems, statistical analysis, professional learning, policy, guidance, frameworks and advisory visits that support interpretation.

Educators can be supported through the use of common language or an analysis framework. This should help them to draw out observational statements such as: ‘X has increased by X% since X’, ‘there is a gap of X% between X and X’ or ‘I notice an upwards trend in X’.
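Purely as an illustration of the kind of framework output described above (the figures, measure names and functions below are invented for the example, not drawn from the review), such statements can be generated directly from a small number of data points:

    # Illustrative sketch only: hypothetical figures and functions for turning
    # a pair of data points into an observational statement.
    def describe_change(measure, earlier, latest, since):
        change = latest - earlier
        direction = "increased" if change >= 0 else "decreased"
        return f"{measure} has {direction} by {abs(change):.1f}% since {since}."

    def describe_gap(group_a, value_a, group_b, value_b):
        gap = abs(value_a - value_b)
        return f"There is a gap of {gap:.1f}% between {group_a} and {group_b}."

    print(describe_change("Attendance", 91.2, 93.5, "September"))
    print(describe_gap("boys", 68.0, "girls", 74.5))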

Looking at ‘live’ or current data was found to be most beneficial for improvement: the most effective schools did not wait for local or national aggregation. Educators should gather and analyse improvement data frequently and look for patterns, trends and variation, so that findings can be reflected on and responded to immediately.

It is important to understand variation in data: when it is to be expected and when it may indicate that a real change has occurred. Cohort size matters when judging significance; a single learner has a far greater effect on the headline figures for a class of 10 than for a class of 35.
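A quick, purely illustrative calculation (cohort sizes chosen to match the example above) shows why:

    # One learner's share of a headline percentage, for two cohort sizes.
    for cohort_size in (10, 35):
        share = 1 / cohort_size * 100
        print(f"One learner in a cohort of {cohort_size} moves the percentage by about {share:.1f} points.")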

Analytical interpretation of data

It is important that leaders and practitioners are able to interrogate data in order to make comparisons and identify trends and patterns across cohorts, over time and for specific demographic groups, including learners with key characteristics.

The most effective schools also look at data in terms of those learners who may be ‘at risk’ or ‘borderline’ for their age and stage. This was found to be effective for targeting and closing gaps.
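As an indicative sketch only (the tracking extract, column names such as ‘eFSM’ and the ‘borderline’ threshold are all hypothetical and will vary between tracking systems), this kind of interrogation of cohorts, demographic groups and borderline learners might look like:

    import pandas as pd

    # Hypothetical tracking extract: column names and values are illustrative only.
    tracking = pd.DataFrame({
        "year_group": [7, 7, 7, 8, 8, 8],
        "eFSM": [True, False, False, True, False, True],   # eligible for free school meals
        "reading_score": [88, 102, 95, 84, 110, 90],
    })

    # Compare average attainment across cohorts and a demographic group.
    print(tracking.groupby(["year_group", "eFSM"])["reading_score"].mean())

    # Flag learners who are 'borderline' for their age and stage (threshold illustrative).
    borderline = tracking[tracking["reading_score"].between(85, 95)]
    print(borderline)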

It is important to intersect data and evidence to understand why a pattern, trend or gap exists. For example, if writing is identified as an area of low attainment in a primary school, educators may look at a variety of data and evidence to understand ‘who?’ and ‘why?’
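As one possible illustration (the learner identifiers, scores and attendance figures are invented, and the link between writing and attendance is only an example of a question worth exploring), intersecting two data sources might look like:

    import pandas as pd

    # Hypothetical extracts: joining two sources on a learner identifier to explore 'who' and 'why'.
    writing = pd.DataFrame({"learner_id": [1, 2, 3, 4],
                            "writing_score": [72, 55, 80, 58]})
    attendance = pd.DataFrame({"learner_id": [1, 2, 3, 4],
                               "attendance_pct": [96, 81, 94, 79]})

    combined = writing.merge(attendance, on="learner_id")
    # Look at attendance for learners with the lowest writing scores (threshold illustrative).
    print(combined[combined["writing_score"] < 60])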

Classroom practitioners should be supported to make the connection between high-level data, whole-school or departmental data, and classroom-level data, pedagogy and provision.

The role of Assessment for Learning (AfL) is key to improvement in the classroom. AfL strategies can be utilised to engage learners in understanding their progress.

Using comparator data

Schools and settings should utilise local and national comparator data to:

  • share practice
  • reflect on their own progress
  • consider alternative and more effective approaches
  • create a culture of curiosity around data and encourage dialogue and reflection

Research from the London City Challenge found that some schools were not utilising data within their family or cluster groups effectively. Providing a clear rationale for data-sharing activities, alongside modelled opportunities to apply them, increases the chance of success.

Research shows that between-class variation in learner outcomes is typically much greater than the variation between schools. It is therefore important to consider how data is examined and compared internally as well as externally.
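A minimal sketch of checking this internally, using invented scores for two schools with two classes each (all values are hypothetical), could be:

    import pandas as pd

    # Hypothetical outcomes for two schools, two classes per school (values invented).
    df = pd.DataFrame({
        "school": ["A", "A", "A", "A", "B", "B", "B", "B"],
        "class":  ["A1", "A1", "A2", "A2", "B1", "B1", "B2", "B2"],
        "score":  [70, 72, 55, 58, 68, 71, 52, 57],
    })

    # Spread of class means within each school, versus spread of school means.
    class_means = df.groupby(["school", "class"])["score"].mean()
    school_means = df.groupby("school")["score"].mean()
    print("Within-school spread of class means:", class_means.groupby("school").std())
    print("Spread of school means:", school_means.std())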