“I Heard He Had a 10 bu. Difference”
As you continue to prepare for the next crop season, a lot of people are going to offer you all kinds of products and ideas in an effort to influence your management decisions. The fact is, agronomy – crop production science – is so complex that no matter how much money companies, universities and groups like the ISA On-Farm Network® spend on research, there are very few crop management decisions you can make with absolute certainty of the outcome. I’ve said this before, but we’re all influenced, both consciously and unconsciously, by biased information.
This bias can come in many forms, most of which trace back to basic human nature and a desire to “get it right.” In my recollection, just about every person who has ever told me about their experiences in a casino described how much money they won. And yet, the last time I checked, the casinos hadn’t gone under, so obviously not everyone who puts their money on the line there comes out a winner.
So how does this compare to farming? Well, farmers – and maybe even researchers – are more likely to talk about their successes – the ones that give them those super-high yield numbers – than about their misses (unless it was a really big mistake). As a result, even if a given product or practice worked in only three out of ten farm trials, the farmers with the three wins are more likely to talk about their trials than the growers whose trials didn’t perform. And the larger the yield bump, the more the experience gets shared with other growers. The net effect is that the largest responses are the ones you hear about, not the ones where performance was so-so or even negative.
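This “only the wins get talked about” effect can be put in numbers. The sketch below is a purely hypothetical illustration (the trial count, noise level, and zero true effect are all assumptions, not data from any actual On-Farm Network trial): even when a product has no real yield effect, averaging only the trials that happened to come out positive makes it look like a winner.

```python
import random

random.seed(42)

# Hypothetical setup: a product with NO true yield effect,
# tested in 10 on-farm trials with ordinary field-to-field noise.
true_effect = 0.0   # bu/ac, assumed
noise_sd = 4.0      # bu/ac trial-to-trial variation, assumed

trials = [random.gauss(true_effect, noise_sd) for _ in range(10)]

# What gets talked about at the coffee shop: only the positive "wins".
wins = [t for t in trials if t > 0]

all_avg = sum(trials) / len(trials)
wins_avg = sum(wins) / len(wins)

print(f"Average over all 10 trials:  {all_avg:+.1f} bu/ac")
print(f"Average over the wins only:  {wins_avg:+.1f} bu/ac")
```

Because the below-zero trials are filtered out before averaging, the “wins only” number is always at least as high as the honest all-trials average, and usually several bushels higher – which is exactly the gap between coffee-shop talk and replicated-trial data.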
This bias is not limited to farmer talk. Scientific journals tend to focus on the trends researchers see and less on the trends they don’t. The net effect is that if a product or practice performs in just one area or under one set of conditions, that result will often get published or reported. The conditions under which it does not work often go unrecorded, because the effort is focused on positive results. For example, I doubt you will find an article reporting that playing the radio in your cab real loud won’t reduce yield. But if we generated credible data showing such an effect, everyone would hear about it.
Another bias comes from preferential interest in trials that show a big difference early in the season. If a number of trials are implemented and only one shows early differences, that site tends to get a lot more attention: it may be checked more often during the season and possibly even used for collecting additional data. That attention also biases which sites get harvested and which data get collected. Harvest time is often hectic, and in the rush to get the crop in the bin, some sites are lost or their data aren’t recorded properly. But you can be sure the sites expected to be the most responsive are given a higher priority for follow-through.
And when it is time to report the results, I usually see a magazine cover with a picture of a dramatic treatment difference. I don’t think I have ever seen a cover picture captioned “see – no difference.”
So the take-home message is: try to assess the overall probability of a response, not just the extreme result you heard about at the coffee shop or a customer meeting. If you don’t believe you have access to enough information, help collect it this next season. The On-Farm Network can help you design replicated strip trials to test almost any product or practice, yielding reliable, usable data for your farm. If you’d like to know more about what other growers are testing on their farms, attend the annual ISA On-Farm Network conference on February 16, or go to www.isafarmnet.com. You can register for the conference by clicking the 2012 Conference link on the website.