Allotment potato crop weights - analysis of test results
We have many gardeners and allotment owners testing samples of SF60 to see what impact it has on vegetable growth.
Alan's Allotment Man Vs Slug reported their potato results back in August. He’s been patiently waiting for us to help explain why he got a 38% increase in crop weight when adding 20% SF60, but saw a decrease with 10% SF60. Why was there no straight-line association?
The tabulated and graphical results of the total crop weights are below.
Good enhanced growth at 20% SF60, but the 5 and 10% results were pretty much in line with the control soil mix. This left us with the task of explaining why the 5 and 10% mixes had not given small but noticeable gains – we could reasonably have expected a straight-line improvement.
There are three groups of possible causes:
- Human error
- Physical, chemical and biological errors – science we did not account for
- The mathematics of growth trials
We’ll look at each and explain why we think the most likely cause is “maths”.
Let us start by reaffirming our belief that we learn by looking at cause and effect. Investigating what ‘went wrong’ is not aimed at blaming anyone, or at avoiding issues with the product if there are any. Our testers and SoilFixer all have limited time and resources. We are all testing in the knowledge that plant growth testing is massively complex. It can take millions of pounds and 5-10 years to fully test and prove fertiliser-type products. It is worth noting Alan has a background in engineering and is well versed in testing techniques.
Human error
Things can go wrong – from very simple things like incorrectly mixing the samples, mislabelling and mixing up samples, poor mixing leading to inconsistent results, or watering one plant more than the others. The list can be very long!
Physical, biological, chemical reasons
These can be test design and set-up issues – for example, some plants have more sun while others are in shade. These can be manufacturing issues – e.g. the samples sent are from dissimilar batches, or the batch was not made to the correct specification. The batch might not be homogeneous, which is then passed on when the pot mixes are created. Some plants might have been infected with disease – even if not “visible”. Some plants might have been infested with aphids and others not. The list again can be very long!
With so many variables, designing plant growth tests can be challenging.
Now let’s look at the maths
In any plot, garden or field, plants grow to different sizes: some are large and some much smaller. Statistics comes into play – we have a mean weight (or size), a spread, deviations from the norm, etc.
If we simplify this and create a graph of plant weight in grams versus soil mix (0% control, 5%, 10% and 20%), here’s what we could expect just from the random nature of growth.
We would expect a range of plant weights or sizes. We would also expect a range of weights within each set – ie within the control, 5%, 10% and 20%. We would expect the average weight to increase as we added 5, 10, 20%.
Many will already know the statistics and maths better than me, but it is nice to show a simplified graph and then add possible scenarios which account for the results seen.
Note: even in well controlled plant growth tests, the range of results can vary by 200% between best and worst case plants in each set.
The graph illustrates that by random chance we can get negative, neutral and positive results from single data points.
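The same point can be sketched with a quick simulation. The mean weights and spread below are hypothetical, illustrative numbers (not Alan's actual data): even when the true averages rise steadily with SF60 %, a single plant per mix often fails to show that upward trend.

```python
import random

random.seed(42)

# Hypothetical true mean crop weights (grams) per soil mix - illustrative only.
true_means = {"control": 1000, "5% SF60": 1050, "10% SF60": 1100, "20% SF60": 1200}
sd = 250  # wide natural spread: best and worst plants can differ hugely

trials = 2000
trend_broken = 0
for _ in range(trials):
    # Draw one plant per mix, as in a single-replicate test.
    draws = [random.gauss(mu, sd) for mu in true_means.values()]
    # Did the observed weights fail to rise monotonically with SF60 %?
    if any(b < a for a, b in zip(draws, draws[1:])):
        trend_broken += 1

print(f"{trend_broken}/{trials} single-plant trials broke the true upward trend")
```

With these assumed numbers, well over half of the simulated single-plant tests show at least one apparent decrease somewhere along the 0–20% range, purely by chance.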
In plant growth trials, even with tight control of all variables, removing random variation still requires multiple plants in each set to create a representative average – at least three, preferably 10. It is common to test hundreds and indeed thousands of plants in a full-scale randomised trial.
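To see why averaging helps, here is a minimal sketch (again with made-up, illustrative numbers) of how the spread of a set average shrinks as more plants are included in each set:

```python
import random
import statistics

random.seed(1)

mean_weight, sd = 1000, 250  # hypothetical grams; illustrative only
trials = 2000

def spread_of_averages(n_plants):
    """Standard deviation of the set average across many repeated trials."""
    averages = [
        statistics.mean(random.gauss(mean_weight, sd) for _ in range(n_plants))
        for _ in range(trials)
    ]
    return statistics.stdev(averages)

for n in (2, 3, 10):
    print(f"{n:>2} plants per set: spread of averages ~ {spread_of_averages(n):.0f} g")
```

The spread of the average falls roughly with the square root of the number of plants, which is why 10 plants per set gives a far more trustworthy figure than two.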
Whilst it is entirely possible the results stem from errors in formulation etc., with only two replicates in each set it is more likely that the lack of any apparent straight-line association across 5%, 10% and 20% is down to random growth. For the keen-eyed reader – it is also entirely possible that within this test set, the 38% increase was itself a random chance event!
For those interested – the results from Jason Daff at the Sainsbury lab, where many replicates were used, will be interesting. Our own tests used a minimum of three and, where possible, five or 10 plants.
PS: this does not mean these tests have no value. Many single tests can be combined to point to the bigger picture. If 10 people all see a 38% increase with the 20% addition, then this is support for the overall growth enhancement.