Sunday 4 December 2011

To eat or not to eat (it): The never ending quest for safe food

Image: 'Crystal ball take #1' by Isobel T, under a CC license
Food safety has always been high on the agenda of state authorities, the industry and - of course - the consumers. Expecting the food available to us to be safe is a reasonable demand. One that sounds much easier than it actually is!

The more ingredients are employed in food production, and the longer the route they follow before ending up in a product, the more complex the food supply chain becomes and the higher the number of potential hazards. But, relax, that doesn't mean that the end product will be unsafe. Practices are in place (and in the EU those practices are required by law and enforced) to ensure that hazards are identified and controlled at every stage, so that the final product is safe. Further to that, the food authorities operate sampling programmes and perform inspections, so the overall risk is kept at very, very low levels. In fact, I suspect that the risk of consuming spoiled food due to improper storage or handling at home (or in mass-catering establishments) may be an equally or more important factor to keep in mind when discussing food safety!

Regardless of how things stand at present, it is fair to say that a chain is as strong as its weakest link. With new players joining the food production and distribution 'arena', with globalisation reshaping both food production and consumer trends, with climate change very much in progress, with contamination incidents of a scale sufficient to affect the whole food chain, with the evolution of bacteria and other life forms that can affect food safety and quality and - altogether - with the emergence of new hazards, the quest for safe food is still very much ongoing.

Still, as every new food crisis proves, the system is not perfect. Could it ever be? I would say, plainly, no. Pardon my cynicism, but I feel it is impossible to check all food reaching the consumer for all potential hazards. And I mean really all potential hazards, regardless of whether they are considered reasonably expected, rather-not-expected or completely unpredictable. Oh yes, the last two are also seen from time to time. The sunflower oil crisis was something like that.

Performing an analysis on a foodstuff is normally restricted to a single hazard factor or - at best - to a limited group of hazards. An analysis for Salmonella, for instance, will not highlight the presence of other pathogens, unless a specific analysis for them is also carried out, nor will it reveal a high heavy-metal concentration, even if that were the case. Currently, a long list of analytical methods is at hand to reliably identify the presence (and quantity) of numerous contaminants in food. But while for each method in use attention has been paid to ensuring specificity, accuracy, reproducibility, etc., there is no easy way to test food samples for their safety as a whole.

Is that critical? Well, maybe not yet. But it may well make sense in times of tight budgets. It may also prove useful as a line of defense against emerging hazards, including bio-terrorism. The latter sounds a bit exotic, true (although it's not entirely a new story). But it remains a concern and it's always better to be ready and alert! Going slightly off-topic, it seems that DARPA has been identifying a number of dark possibilities (some of which could also be deployed via the food supply channels) and is trying to identify viable counter-measures....

Today, to my knowledge, there is no methodology in practical use that can indicate the overall safety of food.

Sensory analysis can give hints about the state of a foodstuff, but it can only detect hazards that affect the sensory profile of the tested sample; the mere presence of many of the common food pathogens, the existence of chemical contaminants, etc., would normally go unnoticed.

Intelligent packaging advances have taken steps in that direction (e.g., microbial growth indicators, time-temperature indicators, shock indicators, etc.) but their deployment has been limited, mostly because of cost issues.

A number of spectroscopy and imaging techniques (e.g., hyperspectral imaging) have been developed, but they are intended for measuring quality parameters of specific foodstuffs, mostly where the necessary monetary driving force exists (e.g., meat quality estimation).

New-generation, biotechnology-based sensors, employing cell membranes, living cells or enzyme-containing membranes, also seem promising as analytical tools. They tend to exhibit specificity but, given their low cost, they could be assembled into kits suitable for running a pool of tests on a single sample.
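Sketching that thought in code (with entirely hypothetical sensor names, signals and thresholds), a pooled kit would simply run its battery of specific yes/no tests and fail the sample if any of them fires:

    # A sketch of pooling several cheap, specific biosensors into one kit:
    # each sensor answers only its own narrow question, but together they
    # give a first overall verdict. All names and thresholds are invented.
    panel = {
        "salmonella":  lambda s: s["salmonella_signal"] > 0.5,
        "e_coli":      lambda s: s["e_coli_signal"] > 0.5,
        "heavy_metal": lambda s: s["metal_signal"] > 0.3,
    }

    def kit_verdict(sample):
        # Fail the sample if any individual sensor fires.
        hits = [name for name, test in panel.items() if test(sample)]
        return ("FAIL", hits) if hits else ("PASS", [])

    reading = {"salmonella_signal": 0.1, "e_coli_signal": 0.7, "metal_signal": 0.0}
    print(kit_verdict(reading))   # ('FAIL', ['e_coli'])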

I believe that we should go much further than that. We should keep the existing analytical methods and work on improving their specificity, sensitivity, etc., but we should also develop tools that would allow a first yes/no assessment of the safety of a foodstuff. I wouldn't be surprised if, initially, that came with a generous error margin (false-positive and false-negative results).
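For clarity, by 'error margin' I mean the usual screening-test trade-off between sensitivity and specificity. A tiny illustration, with made-up counts:

    # Hypothetical screening results checked against confirmed outcomes.
    tp, fp, tn, fn = 80, 15, 890, 15   # invented counts, for illustration only
    sensitivity = tp / (tp + fn)       # share of truly unsafe samples caught
    specificity = tn / (tn + fp)       # share of safe samples correctly passed
    print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")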

Maybe we could get some inspiration from the IT world. Take the spam filters applied to e-mail, for instance. They have employed a number of technologies over time. Initially, they looked for specific words in the subject line or in the e-mail body. Others tried to evaluate the 'reputation' of the originating e-mail server, checking it against public blacklisting databases. Nowadays, however, spam filters are much cleverer and more adaptable. They evaluate e-mails against a set of rules and they have the capacity to 'learn' as you (the user) re-classify false positives (messages incorrectly flagged as spam) and false negatives (spam that ended up together with the normal e-mail).
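To make that concrete, here is a minimal sketch (in Python, with invented training data) of the kind of learning filter described above: a naive Bayes word-count model that updates itself every time the user re-classifies a message. Nothing here corresponds to any real filter's internals; it is just the shape of the idea.

    from collections import defaultdict
    import math

    class NaiveBayesFilter:
        def __init__(self):
            self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
            self.msg_counts = {"spam": 0, "ham": 0}

        def train(self, text, label):
            # Called on initial training data and again whenever the
            # user re-classifies a message - the 'learning' step.
            self.msg_counts[label] += 1
            for word in text.lower().split():
                self.word_counts[label][word] += 1

        def is_spam(self, text):
            # Compare log-probabilities under the two classes, with
            # add-one smoothing so unseen words don't zero things out.
            scores = {}
            total_msgs = sum(self.msg_counts.values())
            for label in ("spam", "ham"):
                vocab = len(self.word_counts[label]) + 1
                total_words = sum(self.word_counts[label].values())
                score = math.log((self.msg_counts[label] + 1) / (total_msgs + 2))
                for word in text.lower().split():
                    count = self.word_counts[label].get(word, 0)
                    score += math.log((count + 1) / (total_words + vocab))
                scores[label] = score
            return scores["spam"] > scores["ham"]

    f = NaiveBayesFilter()
    f.train("cheap pills buy now", "spam")
    f.train("meeting agenda for monday", "ham")
    print(f.is_spam("buy cheap pills"))        # True
    # The user flags a missed spam message; the filter 'learns':
    f.train("limited offer buy today", "spam")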

Would it be possible to define a set of criteria that would allow a machine-learning system to evaluate the overall safety of a food sample? For sure, deploying such a system world-wide would give it a lot of data for refining its model (the 'learning' process). Would that make a good first line of defense? Could we devise cheap and effective methods to fill in any gaps such a system would have?
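Just to illustrate the thought (and nothing more), here is a toy transplant of the spam-filter idea to food samples: each sample is reduced to a handful of hypothetical cheap measurements and classified by comparison with previously confirmed samples; laboratory confirmations feed back into the reference set. The features, the numbers and the nearest-neighbour approach are all my own placeholders.

    import math

    # Hypothetical cheap measurements: pH, water activity, ATP luminescence.
    labelled = [
        ((6.5, 0.98, 120.0), "unsafe"),
        ((5.1, 0.91,  10.0), "safe"),
        ((6.8, 0.99, 300.0), "unsafe"),
        ((4.2, 0.85,   5.0), "safe"),
    ]

    def classify(sample, k=3):
        # k-nearest-neighbour vote: label the new sample like the
        # majority of its closest confirmed neighbours.
        dists = sorted(
            (math.dist(sample, features), label) for features, label in labelled
        )
        votes = [label for _, label in dists[:k]]
        return max(set(votes), key=votes.count)

    def learn(sample, true_label):
        # The world-wide feedback loop: whenever a lab confirms a
        # result, the confirmed sample joins the reference set.
        labelled.append((sample, true_label))

    print(classify((6.6, 0.97, 150.0)))   # likely "unsafe"
    learn((5.5, 0.93, 20.0), "safe")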

The IT world has also been using other methods that may be of interest here. Computer viruses, for instance, were initially sought using a signature-search approach. That has evolved considerably as computer viruses became more complicated and the systems they targeted also changed; process emulation is one of the alternatives employed.
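For illustration, the signature-search idea is little more than scanning raw bytes for known patterns - something like the following sketch, where the 'signatures' are of course invented and real anti-virus engines are far more sophisticated:

    # Invented demo signatures mapping byte patterns to names.
    SIGNATURES = {
        b"\xde\xad\xbe\xef": "demo-virus-A",
        b"EVILPAYLOAD":      "demo-virus-B",
    }

    def scan(data: bytes):
        # Return the names of all known signatures found in the data.
        return [name for sig, name in SIGNATURES.items() if sig in data]

    print(scan(b"...header...EVILPAYLOAD...rest..."))   # ['demo-virus-B']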

For the record, the spaceships of the Star Trek series were lucky enough to have 'biofilters' that could identify and remove 'anomalies' in the matter they were processing, including pathogenic agents.

Thinking aloud, I assume that the ultimate test would be systematic consumption and observation, but - obviously - that is not an acceptable approach! Could we, perhaps, have an ecosystem of bacteria do that for us? Their survival, flourishing or decline - in other words, a change in the balance of the ecosystem - could give clues about the hazard factors present in the sample... Just a thought, but I feel it might be worth looking into....
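Purely to show what 'reading' such a sentinel ecosystem might look like, here is a speculative sketch: compare the population balance before and after exposure to the sample and flag large shifts. The species, the numbers and the threshold are all invented.

    # Baseline population balance of a hypothetical indicator community.
    baseline = {"strain_A": 0.40, "strain_B": 0.35, "strain_C": 0.25}

    def ecosystem_shift(after):
        # Total variation distance between the two population balances.
        return 0.5 * sum(abs(baseline[s] - after[s]) for s in baseline)

    after_exposure = {"strain_A": 0.10, "strain_B": 0.60, "strain_C": 0.30}
    if ecosystem_shift(after_exposure) > 0.2:   # threshold is arbitrary
        print("ecosystem disturbed - sample flagged for follow-up")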
