Ian Lynch's take on the BECTA fiasco

I have recently read an eye-opening email from Ian Lynch about what happened in the UK with BECTA.

I have received his permission to republish his thoughts here. I think his email speaks volumes about what happened.

Ian Lynch's email

Fundamentally, I'm not complaining that we were not successful in the tender - I have no idea how strong the winning bid was. I'm complaining that the tender process adopted was broken. This is despite the fact that 130 MPs signed an Early Day Motion in Parliament last year censuring BECTA for procurement frameworks that block out Open Source.

  • The tender was not fit for purpose.

Evidence: The tender document was not written specifically for this project. It made references to a research project which the e-mail feedback to questions indicated were irrelevant. It is clear that the document was a hurried adaptation of another tender document originating in 2005.

The mark scheme is generic and does not adequately reflect the title of the project. It is quite possible to arrive at combinations of marks that produce anomalous results in relation to the project title, e.g. winning the tender with no experience in either schools or open source.

(I'm Chief Assessor at an OFQUAL accredited Awarding Body so I do know about assessment)

  • Vital information might not have been equally available to all bidders

Evidence: The details of how marks were allocated and their weighting were not available in the tender specification. Any company that had previously tendered for a similar BECTA project and kept a copy of this feedback with the guidance would have an unfair advantage over those who had not had this information. This provides a barrier to entry for new companies and fuels accusations of cronyism. (I do not aim this at the winner; it is not their fault they won a bid. BECTA should have foreseen this potential risk and eliminated it.) All bidders should have been provided with the detailed mark scheme and guidance on the allocation of marks from the outset. (Delay the process a week if necessary.) Knowing what weight is given to different parts of the spec is important when less than 10 percentage points separate several candidates in the scoring. Tenders should not be about who can best guess what the procedures require; they should be about who is best qualified to do the work.

Look at this link and ask how there can be any doubt about meeting project timescales when, at the outset, the bid provides more than an order of magnitude beyond the tender requirements?

The same goes for value for money. Only half marks, yet the project deliverables and a lot more are being made available on the day the project starts, with an additional 60k in committed private sector sponsorship gathered in a short space of time and a commitment for more to follow. Why would people motivated to do that not make best use of any further funding?

  • However well a pre-conceived process is followed, the outcome is what matters. We say to children taking tests: is the answer sensible? If not, what is wrong with your process?

Here are some suggestions for improvement.

  1. Ensure tender documents are prepared by someone knowledgeable in the field the tender targets, i.e. School Open Source Communities and Open Source. (I would have done so if asked and withdrawn from the bidding process, so it's not that such people don't exist - I am known to BECTA)

  2. Ensure the required deliverable outcomes reflect currently available provision. Schoolforge UK/TLM's starting points are so far in advance of the targets set in the spec that it shows a complete lack of understanding of where things are at. (Incidentally, given that, it's a mystery how we didn't score full marks on the ability to deliver, since we had already delivered to that level with no public funding.)

  3. Ensure that the structure is more specific, with better guidance, to reduce the need for repetition. I found the separation of timescales from targets strange. It seems much more sensible to integrate them, since a target isn't a target without an end point.

  4. Provide the mark scheme and ensure that how to achieve the marks is clear and transparent. This is simply good assessment practice. I can provide training.

  5. Move away from simplistic numeric scoring and use "Essential" and "Desirable" criteria to eliminate bids with key omissions, fine-tuning with numbers only in areas where that detail is meaningful. In a tender entitled Schools Open Source Project, extensive experience of working in schools and in open source communities should both have been essential, and experience of projects involving both desirable, to reflect the title. I took this to be what was meant by "similar projects". Obviously the tender evaluators didn't interpret "similar" in the same way. That scope for ambiguity is a serious weakness in such an important area. Graduated scores can then be applied in those essential areas once the elimination has taken place.

  6. Phasing of funding is not a value-for-money issue; it is a technical mechanism to reduce risk. If you simply state the preferred method and ask the respondents to say whether they agree, it avoids the tenderer trying to second-guess what specific procedures are required and is much fairer. For example, in the SF bid, and again at interview, it was stated that the bidders were prepared to do anything BECTA wanted, and that we would not pay anyone until work was complete, keeping money in escrow released only on BECTA's authorisation if that is what BECTA wanted. It is a mystery why that scored only a third of the marks on that aspect, since any possibility of money being lost or paid out for substandard work was eliminated.

  7. Consider the difference between validity and accuracy in assessment. You might well use a numeric score system accurately, but if the criteria behind the numbers don't accurately reflect the title of the tender, the entire exercise becomes invalid. If the weighting of the numbers allows a technical issue like phasing funds, or specific processes that might or might not be practically important, to outweigh extensive experience in the subject matter of the tender, use a different method.
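The elimination-then-scoring model proposed in suggestion 5 can be sketched in a few lines of Python. This is purely illustrative, not BECTA's actual process; the criterion names and weights are hypothetical.

```python
# Two-stage tender evaluation: eliminate bids missing any "Essential"
# criterion first, then rank the survivors on weighted "Desirable" scores.
# All criterion names and weights below are made up for illustration.

ESSENTIAL = {"schools_experience", "open_source_experience"}
DESIRABLE_WEIGHTS = {"combined_project_experience": 3, "value_for_money": 2}

def evaluate(bids):
    """bids: list of dicts with 'name', 'criteria' (set of essentials met)
    and 'scores' (desirable criterion -> mark out of 10)."""
    # Stage 1: elimination — a bid missing any essential never competes on numbers.
    qualified = [b for b in bids if ESSENTIAL <= b["criteria"]]
    # Stage 2: graduated numeric scoring applied only among qualified bids.
    def total(bid):
        return sum(w * bid["scores"].get(c, 0) for c, w in DESIRABLE_WEIGHTS.items())
    return sorted(qualified, key=total, reverse=True)

bids = [
    {"name": "A", "criteria": {"schools_experience"},
     "scores": {"value_for_money": 9}},
    {"name": "B", "criteria": {"schools_experience", "open_source_experience"},
     "scores": {"combined_project_experience": 6, "value_for_money": 7}},
]
# Bid A is eliminated outright despite a high value-for-money score,
# because it lacks an essential criterion.
print([b["name"] for b in evaluate(bids)])
```

The point of the design is that no amount of numeric score can compensate for a missing essential, which is exactly the ambiguity the email says a purely numeric scheme allows.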

"Open Source projects demonstrate that some forms of control can be counter-productive and the key is to optimise the balance between control and freedom to motivate productivity. What matters is the overall outcome in terms of value for money." (quoted from the bid) This is why a bid about open source should look a the wider issue of quality assurance rather than simply control - this suggestion seemed to lose marks. Assessors need to get up to date with quality management in relation to open source projects.

Comments

Submitted by marielle on

"The required deliverable outcomes reflect current available provision. Schoolforge UK/TLM’s starting points are so far in advance of the targets set in the spec it shows complete lack of understanding of where things are at. "

Yes, I am afraid this is too often the case. As a beneficiary of what gets produced with funding from the likes of Becta, my opinion is that most of what gets produced with Becta-type funding has no impact whatsoever (it doesn't get used by more than 50 people) and typically dies within 3 years. Funding typically covers the development of brand-new projects designed from scratch, never the support of projects that have already got off the ground through their own efforts.

This is in some part due to the fact that these institutions are used to awarding research funding. In that context, research is carried out and a report is written. Two or three years after the funding has been awarded, all deliverables have been submitted. They are made available in some form or another (PDF, book, presentations, etc.). End of the cycle. The researcher will apply for another grant, proposing to explore new ideas.

With software, CMS, or web services a completely different model is required.

I am afraid most people working in education institutions (government-funded or universities) have little or no knowledge of software development processes (I include CMSs and other solutions there). They are a lot more familiar with the discourse of a bunch of consultants used to writing research reports than with that of developers with proven records.

As an academic researcher, I have on many occasions been exposed to the same frustration expressed in this article. It was a piece of cake to obtain funding for some obscure research that nobody cared about, that would have limited impact on the field, and that was in any case unacceptably compromised by the suboptimal data analysis techniques predominantly used in the field. It was seemingly impossible to obtain any funding to help build a better infrastructure that would have contributed to better quality research in the field at large. On typical research proposals, informed comments were received and decisions were on the whole perceived as rather fair. On the second type of proposal, I always felt that what had made me miss out was not the poverty of my ideas but the abysmal lack of knowledge that transpired in the reviewers' comments: lack of knowledge of what was already available and of what had been technically possible for a few years.

For these blunders to stop repeating, it is important to take the time to educate the members of the education institutions. I don't expect, however, such attempts to be very successful. It is quite difficult for a person with no development experience whatsoever to understand the value of the points being made. A better approach is perhaps to accept that big institutions are resistant to change and that new ways of supporting worthy projects need to be found.

Author information


Biography

Tony is the founder and the Editor In Chief of Free Software Magazine