Sunday 11 December 2011

Omniphobic, huh?

[Image: 'Slippery when icy' by ksr8s, under a CC license]
That was meant in a scientific sense, of course!

'Omniphobicity' is the property of some surfaces to repel both polar and non-polar liquids. The former (polar liquids) typically means aqueous solutions, and surfaces that repel them are called 'hydrophobic'. The latter (non-polar liquids) typically means oils, and surfaces that repel them are called 'lipophobic'.

One would think that hydrophobic surfaces would tend to be lipophilic while hydrophilic surfaces would tend to be lipophobic. Well, it often is that way, but not always. The degree of hydrophobicity of a surface varies, depending on a number of factors, including the composition and structure of the interfacial layers.
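To put a rough number on 'how hydrophobic', the usual measure is the contact angle θ that a droplet forms on the surface; for an ideal, smooth surface it is given by Young's equation, the γ terms being the solid-vapour, solid-liquid and liquid-vapour interfacial tensions:

\[
\cos\theta = \frac{\gamma_{SV} - \gamma_{SL}}{\gamma_{LV}}
\]

Water contact angles above 90° are conventionally called hydrophobic; surfaces like the lotus leaf push well past 150° ('superhydrophobic'), with surface roughness amplifying the underlying chemistry.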

Interestingly, the behaviour of surfaces (or 'interfaces', to be more precise) regulates many processes in nature, including some we don't normally think about. For instance, the feathers of aquatic birds tend to be covered by a hydrophobic powder that waterproofs them, while in some cases the hydrophilicity of the feathers is controlled so as to allow some species to submerge in water.

Plant leaves tend to be hydrophobic, as they are covered by waxes. A classic example is the lotus leaf, seen in the video below:



Exploiting the properties of different surfaces has found plenty of technological applications; Teflon-coated pans are a humble but extremely common one. Applying coatings to surfaces can modify their behaviour, largely regardless of the composition of the underlying material. For instance, the following video shows the behaviour of two types of wool cloth when wetted:



Hydrophobic behaviour feels a bit weird, doesn't it?




But, really, the options and possibilities are many. For instance, rendering surfaces hydrophobic helps with waterproofing and grants them 'self-cleaning' abilities. Commercial products for that have been on the market for several years and are extremely easy to apply (e.g., nanophos products - btw, I'm not affiliated with them).

Rendering surfaces that dislike both polar and non-polar liquids, however, has proven a bit challenging. You see, apart from achieving the desired surface behaviour, the resulting layer normally has to have acceptable mechanical properties and reasonable durability. Recently, a Harvard-submitted paper appeared in Nature describing the creation of 'Bioinspired self-repairing slippery surfaces with pressure-stable omniphobicity' [a more layman-friendly description can be found in the Discover magazine blog entry]. As the title suggests, the surface modification was inspired by the pitcher plant. The authors suggest that such a surface could be advantageous in liquid handling, in medical applications and in applications where anti-fouling and self-cleaning properties are needed (e.g., in the food industry).

That's certainly an interesting technology with good potential to prove 'disruptive'. Of course, for food (and medical) applications, there are still a number of rigorous checks to make in order to ensure that the particular surface coating is toxicologically acceptable, remains stable over the range of temperatures, pH values, solvents and mechanical stresses used in food processing, cannot be attacked by bacteria (unlikely, since it is omniphobic) and doesn't break down or leach in any way into the food it is in contact with. And all that, at a reasonable cost... If it goes through, I bet we'll soon see some interesting product concepts on the market!

Sunday 4 December 2011

To eat or not to eat (it): The never-ending quest for safe food

[Image: 'Crystal ball take #1' by Isobel T, under a CC license]
Food safety has always been high on the agenda of state authorities, the industry and - of course - the consumers. Asking for the food we have access to to be safe is a reasonable expectation. One that sounds much easier than it actually is!

The more ingredients are employed in food production, the longer the route they follow before ending up in a product and the more complex the food supply chain becomes, the higher the number of potential hazards. But, relax, that doesn't mean that the end product will be unsafe. Practices are in place (and in the EU those practices are dictated by law and enforced) to ensure that at every stage hazards are identified and controlled, so that the final product is safe. Further to that, the Food Authorities operate sampling programmes and perform inspections, so the overall risk is kept at very, very low levels. In fact, I suspect that the risk of consuming spoiled food due to improper storage or handling at home (or in mass catering establishments) may be an equally or more important factor to have in mind when discussing food safety!

Regardless of how things are at present, it is valid to say that a chain is as strong as its weakest link. With new players joining the food production and distribution 'arena', with increasing globalisation that does have an impact on food production and on consumer trends, with climate change very much in progress, with incidents of contamination of a scale sufficient to affect the food chain, with the evolution of bacteria and other life forms that can affect food safety and quality and - altogether - with the emergence of new hazards, the quest for food safety is really still ongoing.

Still though, as every new food crisis proves, the system is not perfect. Could it ever be? I would say, plainly, no. Pardon my cynicism, but I feel it is impossible to check all food reaching the consumer for all potential hazards. And I mean really all potential hazards, regardless of whether they are considered to be reasonably expected, rather-not-expected or completely unpredictable. Oh yes, the last two are also seen from time to time. The sunflower oil crisis was something like that.

Performing an analysis on a foodstuff is normally restricted to a single hazard factor or - at best - to a limited group of hazards. An analysis for Salmonella, for instance, will not highlight the presence of other pathogens, unless a specific analysis for them is also carried out, nor will it reveal a high heavy-metal concentration, even if that were the case. Currently, a long list of analytical methods is at hand to reliably identify the presence (and quantity) of numerous contaminants in food. But while attention has been paid to ensuring the specificity, accuracy, reproducibility, etc. of each method used, there is no easy way to test food samples for their safety altogether.

Is that critical? Well, maybe not yet. But it may well make sense in times of tight budgets. It may also prove useful as a line of defence against emerging hazards, including bio-terrorism. The latter sounds a bit exotic, true (although it's not entirely a new story). But it remains a concern and it's always better to be ready and alert! Going slightly off-topic, it seems that DARPA has been identifying a number of dark possibilities (some of which could also be deployed via the food supply channels) and is trying to identify viable counter-measures...

Today, to my knowledge, there are no methodologies in practical use that indicate food safety altogether.

Sensory analysis can give hints on the state of foodstuffs, but it would only detect hazards that affect the sensory profile of the tested sample; the simple presence of many of the common food pathogens, the existence of chemical contaminants, etc., would normally go unnoticed.

Intelligent packaging advances have taken steps in that direction (e.g., microbial growth indicators, time-temperature indicators, shock indicators, etc.), but their deployment has been limited, mostly because of cost issues.

A number of spectroscopy and imaging techniques (e.g., hyperspectral imaging) have been developed, but they are intended for measuring quality parameters of specific foodstuffs, mostly where there is the necessary monetary driving force (e.g., meat quality estimation).

New-generation, biotechnology-produced sensors, employing cell membranes, living cells or enzyme-containing membranes, also seem promising as analytical tools. They tend to exhibit specificity but, given their low cost, they could be assembled into kits suitable for running a pool of tests on a single sample.

I believe that we should go much beyond that. We should keep the existing analytical methods and work on improving their specificity, sensitivity, etc., but we should also develop tools that would allow a first yes/no assessment of the safety of foodstuffs. I wouldn't be surprised if that came, initially, with a generous error margin (false-positive and false-negative results).

Maybe we could get some inspiration from the IT world. Take the spam filters applied to e-mails, for instance. They have been employing a number of technologies. Initially, they looked for specific words in the subject line or in the e-mail text. Others tried to evaluate the 'reputation' of the originating e-mail server, checking against public blacklisting databases. Nowadays, however, spam filters are much cleverer and more adaptable. They evaluate e-mails against a set of rules and they have the capacity to 'learn' as you (the user) re-classify false positives that were incorrectly flagged as 'spam' and false negatives that ended up together with the normal e-mail.
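Just to make the 'learning' part of that analogy concrete, here is a minimal, purely illustrative sketch (toy code, not how any real filter is implemented): a word-count classifier that updates itself every time the user re-classifies a message.

```python
from collections import defaultdict
import math

class TinySpamFilter:
    """A toy, naive-Bayes-flavoured filter: word counts per class, nothing more."""

    def __init__(self):
        self.counts = {"spam": defaultdict(int), "ham": defaultdict(int)}  # word -> count per class
        self.totals = {"spam": 0, "ham": 0}                                # total words seen per class

    def learn(self, text, label):
        """Update the counts; called on training data *and* on user re-classifications."""
        for word in text.lower().split():
            self.counts[label][word] += 1
            self.totals[label] += 1

    def classify(self, text):
        """Pick the class whose vocabulary best 'explains' the message (add-one smoothing)."""
        scores = {}
        for label in ("spam", "ham"):
            total = self.totals[label] + 1
            scores[label] = sum(
                math.log((self.counts[label][word] + 1) / total)
                for word in text.lower().split()
            )
        return max(scores, key=scores.get)

# Toy usage: a bit of training, a prediction, then a user correction feeding back in.
f = TinySpamFilter()
f.learn("cheap pills buy now", "spam")
f.learn("meeting agenda attached", "ham")
print(f.classify("buy cheap pills"))                # -> 'spam'
f.learn("project meeting moved to friday", "ham")   # the 're-classify' step described above
```

Real filters are, of course, far more sophisticated, but the feedback loop - user corrections flowing back into the model - is the part that seems transferable to food testing.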

Would it be possible to define a set of criteria that would allow a machine-learning system to be used for evaluating the overall safety of a food sample? For sure, deploying such a system at a world-wide level would give it a lot of data to use for refining its model (the 'learning' process). Would that make a good first line of defence? Could we devise cheap and effective methods to fill in any gaps that such a system would have?

The IT world has also been using other methods that may be of interest here. Computer viruses, for instance, were initially sought using a signature-search approach. That has evolved considerably as viruses became more complicated and the systems they targeted also changed. Process emulation is one of the alternatives employed.
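For completeness, the signature-search idea itself is almost trivially simple; a caricature with made-up signatures (nothing from any real scanner) looks like this:

```python
# Hypothetical byte signatures; real scanners use far richer patterns plus heuristics and emulation.
SIGNATURES = {
    "eicar-like-test": b"X5O!P%@AP",
    "made-up-worm":    b"\xde\xad\xbe\xef",
}

def scan(data: bytes) -> list:
    """Return the names of any known signatures found in the data."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

print(scan(b"harmless text"))              # -> []
print(scan(b"prefix X5O!P%@AP suffix"))    # -> ['eicar-like-test']
```

Simple matching like this only catches what you already know about, which is exactly why the field moved on to emulation and behavioural analysis.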

For the record, the spaceships of the Star Trek series were lucky enough to have 'biofilters' that could identify and remove 'anomalies' in the matter they were processing, including pathogenic agents.

Thinking aloud, I assume that the ultimate test would be systematic consumption and observation but - obviously - that is not an acceptable approach! Could we, perhaps, have an ecosystem of bacteria that could do that for us? Their survival, flourishing or decline - in other words, the change in the balance of the ecosystem - could give clues about the hazard factors present in the sample... Just a thought, but I feel it might be worth looking into...