FRM9 vs QAR06




    Ruth CJ
    Participant

    Hello,

    Anyone else had this issue? We very rarely have anything on our FRM9 report. It’s meant to show, amongst other things, all enrolments we coded with withdrawal reason 41 where the learner hasn’t appeared on another provider’s ILR doing an apprenticeship.

    Then in September we got hit with a QAR email listing students coded 41 who haven’t appeared in another provider’s ILR. Some of them left us over six months ago. Why didn’t they show up in FRM9? Now we have no time to contact the students to work out what happened. We fully believed they were leaving us to start an apprenticeship. I feel this is unfair, as we weren’t given the chance to do anything about it sooner, and it’s unrealistic to expect us to routinely follow up every student who withdraws to start an apprenticeship elsewhere.
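    For what it’s worth, my rough understanding of the FRM9 logic is the anti-join sketched below. It’s only an illustrative pandas sketch with made-up file and column names (ULN, WithdrawReason, etc.); the real matching runs on the ESFA’s side against the full sector dataset, which is exactly why we can’t reproduce it ourselves.

        import pandas as pd

        # Our own ILR extract. File and column names here are illustrative,
        # not the real ILR field list.
        ours = pd.read_csv("our_ilr_1819.csv")

        # Enrolments we closed with withdrawal reason 41
        # ("left to start an apprenticeship elsewhere").
        coded_41 = ours[ours["WithdrawReason"] == 41]

        # Stand-in for the sector-wide apprenticeship records the ESFA holds.
        # Providers never see this file, which is the whole problem.
        sector_apps = pd.read_csv("all_providers_apprenticeships.csv")

        # Anti-join on the learner number: anyone we coded 41 who has NOT
        # turned up on an apprenticeship at another provider. These are the
        # rows FRM9 should, in theory, have been showing us all along.
        unmatched = coded_41[~coded_41["ULN"].isin(sector_apps["ULN"])]
        print(unmatched[["ULN", "LearnRefNumber"]])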

     

    jessicar
    Participant

    Hi Ruth,

    Same here, but it seemed to be that FRM09 excluded FM25, whereas the September letter included it.

    I don’t think any of the FRM reports look at FM25 in 18/19?
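    If that’s right, the whole discrepancy comes down to one filter. A quick pandas sketch (invented file and column names, and the FM25 exclusion is only my guess at the FRM09 scope):

        import pandas as pd

        ours = pd.read_csv("our_ilr_1819.csv")  # illustrative names throughout
        coded_41 = ours[ours["WithdrawReason"] == 41]

        # FRM09 seemingly drops fund model 25; the September QAR letter didn't.
        frm09_scope = coded_41[coded_41["FundModel"] != 25]
        qar_scope = coded_41

        # The learners who turned up in the letter but never in FRM09:
        # by construction, exactly the FM25 rows.
        missed = qar_scope[~qar_scope.index.isin(frm09_scope.index)]
        print(missed)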

    Thanks

     

    adambetts
    Participant

    This is part of a bigger pattern of the Agency going Big Data: wiring up our data returns to other sources and the full sector dataset, then hitting us with things that don’t look quite right. We’re now up to 60-odd DSAT reports, 24 FRM reports, and new QAR monitoring reports, plus all of our own internal controls, plus a dozen or so data lock reports.

    Would it not be better to take this completely out of our hands? If the learners appear on an apprenticeship at another provider, let the QAR work it out, rather than expecting us to and sending out last-minute gotcha reports. The same goes for all manner of other checks and monitoring reports: if the data/tech is there to say something is wrong, don’t fund it, or calculate the performance measures to take it into account.

    The proliferation of these reports is a symptom of a bigger, more strategic issue with the volume, complexity and handling of data. The burden, risk and cost are always shouldered by providers.

    Adam

     

    Ruth CJ
    Participant

    I do like it being in our hands. There is a multitude of reasons why our data may not be exactly as expected, and only we would know the answer. If they just took money off us I’d have a massive problem!

    These QAR reports we’ve just had are all good with me. I’m happy to respond, I just wish we had them sooner. I’ve got enough on at this time of year without fixing more errors that I wasn’t able to spot earlier, or writing detailed explanations about why our data is fine.

    One we had was “We have identified learning aim records in your 2018 to 2019 ILR where you have reported a learner as leaving learning before 1 August 2018, but the same learner was reported as continuing in learning in your 2017 to 2018 R14 ILR return”.

    That was where we identified, after the hard close of 17/18, that a whole cohort were on the wrong aim. We followed the 18/19 Provider Support Manual 376-377. We didn’t report the overclaim, as it was matched by an identical underclaim (all the wrong aims were just the old version of the right ones, at the same funding level). I don’t want anyone assuming anything about these! I have a perfectly good explanation, but it’s one that no one outside our College would know.
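    Purely to illustrate, the check they describe boils down to a join across the two years, something like the pandas sketch below (illustrative file and column names; CompStatus == 1 meaning “continuing” is my reading of the ILR spec, not anything quoted in the letter):

        import pandas as pd

        # 17/18 return at R14 and the current 18/19 return.
        # File and column names are illustrative.
        r14_1718 = pd.read_csv("ilr_1718_r14.csv")
        ilr_1819 = pd.read_csv("ilr_1819.csv")

        # Aims we returned at R14 as still continuing (CompStatus 1).
        continuing = r14_1718[r14_1718["CompStatus"] == 1]

        # 18/19 records saying the learner left before 1 August 2018.
        end_dates = pd.to_datetime(ilr_1819["LearnActEndDate"])
        left_early = ilr_1819[end_dates < "2018-08-01"]

        # Learners flagged by the QAR check: continuing at R14 but gone
        # before the new year even started. In our case this is the cohort
        # we moved off the wrong aim after the 17/18 hard close, so the
        # mismatch is real but fully explainable.
        flagged = left_early.merge(continuing, on="ULN", how="inner")
        print(flagged["ULN"].drop_duplicates())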

     