Friday, 26 March 2010

"And finally, Monsieur, a wafer-thin mint." *


Update (28/03/2010): I've just read on BBC News (again) that the Food Standards Agency in the UK is asking food manufacturers to make smaller packages of unhealthy snacks available, as well as to cut saturated fat in foods like biscuits and cakes.


I was really intrigued by the BBC News article regarding the change, through time, in the portion sizes depicted in the Last Supper. The article said that, as the centuries passed, the amount of food on the Last Supper table increased and the portion sizes grew larger!

I always suspected that we eat more than people once used to. And I could imagine that everyday visual stimuli (read: advertisements) would point in that direction. But I wouldn't have easily said that even classical artistic themes, such as the Last Supper, would be affected by what we tend to do in our everyday lives.

The evolution of portion size is coming more and more into focus. People seem less and less skilled at properly controlling the amount of nutrients they take in. And that is understandable. Food was - and still is - a human need. But it is also an element of social life. Not to mention that a range of disorders can affect food intake in terms of both quality and quantity.

One of the trends at the food industry - consumer interface is the increase in the amount of information food producers make available to consumers. Food labelling regulations in Europe do not require the provision of nutritional information unless a nutrition or health claim is made on the food label. However, more and more often, such information is featured on packaged food, while catering businesses have begun adding such information next to dishes on the menu or on the packages of take-away food. According to the Examiner, the new health bill in the United States may make displaying calorie content compulsory for restaurants, which is certainly an interesting development.

Consumers have - at times - expressed confusion over the various food labelling schemes. In Europe, the GDA nutrition labelling scheme is gaining popularity amongst food producers. The GDA scheme informs the consumer about one or more nutritional attributes of a food product, stating the absolute content in a portion of the said foodstuff as well as its relative contribution towards the Guideline Daily Amount for that nutrient. Most implementations of the GDA scheme include references to calories, sugars, fat, saturates and sodium, although the list is being enriched with additional elements.
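To make the arithmetic behind those labels concrete, here is a minimal sketch in Python (the GDA reference values are typical of 2010-era adult labels and the portion figures are invented for the example; check an actual label or the regulation for authoritative numbers):

```python
# Illustrative adult GDA reference values (assumed, 2010-era label style).
GDA = {
    "energy_kcal": 2000,
    "sugars_g": 90,
    "fat_g": 70,
    "saturates_g": 20,
    "sodium_g": 2.4,
}

def percent_gda(nutrient: str, amount_per_portion: float) -> float:
    """Relative contribution of one portion to the daily guideline amount."""
    return 100.0 * amount_per_portion / GDA[nutrient]

# A hypothetical 40 g portion of crisps: ~210 kcal, 13 g fat, 1.2 g saturates.
for nutrient, amount in [("energy_kcal", 210), ("fat_g", 13), ("saturates_g", 1.2)]:
    print(f"{nutrient}: {amount} per portion = {percent_gda(nutrient, amount):.0f}% of GDA")
```

Note how everything hinges on the portion size the producer chooses - which brings us to the next point.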

Part of the debate among food industry stakeholders is what to do with the portion size. There is no strict, formal definition of "portion size"; every food producer can choose what they judge appropriate for their product. But critics argue that setting the portion size on the label of a 100 g pack of crisps to 40 g is simply not realistic. I couldn't agree more. Yes, it might be prudent to eat only 40 g of crisps but come on... No, I'm not trying to hold food companies responsible for my extra kilos of body weight. But I, too, maintain that "portion size" in everyday life is defined more in terms of convenience rather than nutritional content. In other words, the portion is the bag of crisps and not the amount that I should consume based on the nutritional profile of the said crisps.

It is a fact that people can make considerable errors in fixing a portion for themselves. Even the size of the plate the food is put onto may have an effect on how much people consume; the Small Plate Movement encourages people who want to lose weight to use smaller plates.

The debate on portion sizes is certainly no mere philosophical discussion. There are implications at many levels. Risk assessment, for instance, is done on the assumption of a certain portion size (although, depending on the case, a "large portion" may be taken into consideration) - see here for an example of how a risk assessment is carried out. Assuming a smaller portion size may lead to a lower projected risk.
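To see why, note that dietary exposure estimates typically scale linearly with the assumed portion. A minimal sketch in Python (the contaminant concentration, body weight and tolerable daily intake below are all invented for illustration; real assessments use measured occurrence data and population-specific consumption figures):

```python
# Toy dietary exposure estimate: exposure = concentration x portion / body weight.
CONCENTRATION_MG_PER_KG = 0.5   # contaminant level in the food (invented)
BODY_WEIGHT_KG = 70.0           # default adult body weight
TDI_MG_PER_KG_BW = 0.005        # tolerable daily intake (invented)

for portion_g in (40, 100):
    exposure = CONCENTRATION_MG_PER_KG * (portion_g / 1000.0) / BODY_WEIGHT_KG
    print(f"portion {portion_g:3d} g -> {exposure:.5f} mg/kg bw/day "
          f"({100 * exposure / TDI_MG_PER_KG_BW:.0f}% of the TDI)")
```

Halving the assumed portion halves the projected exposure - hence the importance of realistic portion assumptions.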

Indications that consumers, in some cases, adopt a one package = one portion approach cause nutritional concerns, especially when the foods in question can contribute to a higher calorie intake or feature a high glycaemic index. In the case of people with existing disorders, such as diabetes or cardiovascular problems, the issue escalates into a health concern.

And then there is the possible impact on food prices and the environment, owing to the need for additional packaging material (plus a higher transport cost per food unit, etc.).

However, I admit, portion size alone is no panacea for the nutritional problems of - mostly - the western world. Surely, in the good nutrition game, consumer education, ideally from a young age, cannot be ignored. After all, eating well is much easier when it becomes a widely practised habit.


* A line from a rather disturbing scene featuring the dialogue between the Maître D' and Mr. Creosote in Monty Python's The Meaning of Life.


Monday, 15 March 2010

The science in the (kitchen) cupboard

Photo of flying pop-corn

It is interesting that when people want to describe something complex they refer to it as "rocket science". On those grounds, I guess introducing oneself as a "rocket scientist" (or as an aerospace engineer) on most social occasions would cause plenty of heads to turn. Now, I wonder if introducing oneself as a "food scientist" would have the same effect...

I admit - I've never tried it. But my gut feeling is that "food science" scores really low on the coolness scale most people maintain. And that is totally unfair!

Food science is not an isolated island in the sea of knowledge. Instead, it would be better described as a large group of islands: food science is about chemistry, physics and biology applied to systems of considerable complexity. I know. It doesn't sound convincing. So let me give you a couple of simple examples:

Example 1: Pop-corn physics. You know the story: you take dry corn seeds; you throw them in a saucepan with a bit of oil; you warm the saucepan; after a while the corn seeds violently explode into yummy white-ish flakes. So what has happened? Well, basically, two things: firstly, the water inside the seed turns into steam which, when heated up to about 170-200 °C, raises the pressure inside the corn seed to very high levels, until the seed hull finally breaks with an explosion. Secondly, the starch inside the corn seed changes its structure to a higher-volume "jelly" form. When the seeds explode, the steam-starch mixture breaks loose and rapidly expands; at the same time, the steam escapes to the atmosphere, leaving behind the familiar, starch-made, foamy structure that we call pop-corn.
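For a back-of-the-envelope feel for the pressures involved, one can estimate the saturation pressure of the trapped steam with the Antoine equation. A minimal sketch in Python (the coefficients are published Antoine values for water valid over roughly 379-573 K; taking ~180 °C as the popping temperature is an assumption):

```python
def water_vapour_pressure_bar(temp_c: float) -> float:
    """Rough saturation pressure of water via the Antoine equation.

    Antoine constants for water (P in bar, T in K), valid roughly from
    106 to 300 C; expect deviations of a few percent from steam tables.
    """
    a, b, c = 3.55959, 643.748, -198.043
    temp_k = temp_c + 273.15
    return 10 ** (a - b / (temp_k + c))

# ~180 C is taken as the popping temperature (an assumption for this sketch).
for t in (150, 180, 200):
    print(f"{t} C -> ~{water_vapour_pressure_bar(t):.1f} bar inside the kernel")
```

Around ten atmospheres at 180 °C - no wonder the hull gives way with a bang.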


Example 2: Corn flour physics. Corn flour is a rather popular ingredient. In cooking, water-dispersed corn flour often functions as a thickening agent: thin-flowing while warm, it assumes a jelly-like behaviour as it cools. Kids are a bit more familiar with corn flour slime, which is unarguably considerably more fun: just slowly start adding corn flour to a bowl with a bit of water, while stirring to ensure homogeneity; once you reach equal amounts of water and corn flour, the mix will become thicker; keep adding corn flour slowly and you will reach a point (which depends on the type of corn flour you use and the temperature) where the mix changes to a "solid" form under rapid stirring and melts back to liquid form when the stirring stops.

Why that strange behaviour? Well, corn flour contains a lot of corn starch. Starch is a carbohydrate molecule, composed of many glucose units linked together into a long chain. Starch chains also feature smaller glucose chains (branches) attached to the main chain. When in solution, neighbouring starch chains can interact with each other. At a certain starch concentration, stirring or agitating the solution makes the individual starch chains hit and - briefly - stick to a large number of nearby chains, thus creating solid-looking blobs of starch. When the agitation stops, the chains return to their original, "untangled" state, giving the solution back its liquid-like look.
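A common quantitative description of this behaviour is the power-law (Ostwald-de Waele) model, in which the apparent viscosity rises with the shear rate whenever the flow behaviour index n is greater than 1. A minimal sketch in Python (the consistency and flow-index values are invented, not measured for any real cornstarch mix):

```python
def apparent_viscosity(shear_rate: float, k: float, n: float) -> float:
    """Power-law (Ostwald-de Waele) model: eta = K * (shear rate)^(n - 1).

    n > 1 gives shear-thickening behaviour (our cornstarch mix);
    n < 1 would give shear-thinning (e.g., many sauces).
    """
    return k * shear_rate ** (n - 1)

K, N = 0.5, 1.8  # consistency index (Pa.s^n) and flow index - invented values

for rate in (0.1, 1.0, 10.0, 100.0):  # shear rate in 1/s
    eta = apparent_viscosity(rate, K, N)
    print(f"shear rate {rate:6.1f} 1/s -> apparent viscosity {eta:8.2f} Pa.s")
```

The faster you stir, the "thicker" the mix appears - exactly what the kitchen experiment shows.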

Corn-starch suspensions are non-Newtonian fluids, which at certain starch concentrations behave as dilatant (shear-thickening) ones. As you may suspect, depending on the corn starch suspension and the agitation frequency, a number of cool effects can be seen. Check out the video below:


Beyond plain fun, fluids with such properties have a wide range of practical applications, from power transmission in mechanical systems to enhanced performance in bulletproof vests (traditionally employing polymer fibres).

I guess that the humble corn flour doesn't look that plain anymore, does it?

:-)


Sunday, 28 February 2010

Food in times of crisis

airdrop of humanitarian aid

Natural disasters do strike. And when they do, man-made infrastructures are not guaranteed to survive. While people can - in some cases - be protected by early warning systems and effective evacuation plans, it is the magnitude of the disruption to utilities (water, electricity), transportation and communication infrastructure that will determine the final size of the disaster's cost (in lives and in money).

With two such cases very recent - the earthquakes in Haiti and in Chile - a number of questions come to my mind: how can one, effectively, get food and water to people in need? What kind of food? Should people rely on airdrops, or should people/local authorities maintain a small stockpile of supplies for such cases? One could argue that food (and water) are perishable goods, which makes keeping stocks a - possibly - expensive exercise. But, on the other hand, the availability and quality of those elements are essential for the health, welfare and morale of the people affected by a disaster.

Interestingly, there is an ongoing project on "Food For Crises: Developing an option for humanitarian aid"; the project is the objective of a joint thesis within the European Masters Degree in Food Studies. The students assigned to the project are trying to design food "products" suitable for distribution in such crisis situations. They are taking into consideration nutritional value and health benefits, health risks (e.g., allergies) and labelling, cultural parameters (e.g., regarding the composition of the food), shelf-life and ease of use, packaging suitable to withstand rough handling, etc.

When it comes to such purpose-designed food, cost is an important parameter. The technology to achieve all the objectives mentioned above in a single, tailor-made foodstuff does exist, but it may drive the cost to prohibitive levels. It is essential to employ technologies that are either easy and cheap to implement, or technologies that can - in parallel - find wide commercial application, which can help to lower their cost quickly. Fermented foodstuffs using selected microorganism strains are an example of the first kind (although many of the fermented foods we consume on a daily basis are far from ideal for distribution in times of crisis). Packaging technologies belong to the second category.

Then, there is the question of nutritional content. Ideally, each consumer group would have its own emergency food. But in practical terms, I feel that this would be unrealistic. It is not so much the problem of cost, but rather the way to manage such supplies, both when designing the shipment of humanitarian aid and when receiving it. Especially in harsh conditions, where the recipients are the people directly affected, the margin for an organised, individual, nutrition-based distribution is next to nil.

I am no expert on the issue and, I admit, I have little knowledge of how humanitarian aid missions are designed and implemented. But I believe that revisiting the topic - at least as regards the foodstuff part of the equation - may be a productive exercise, possibly indicating cost-effective ways in which the foods we have now can be made better with regard to any of the parameters that would matter in a disaster situation (shelf-life, nutritional content, cost, ...). In a somewhat similar direction, the EC recently had a call on "Health-value-added food products for population groups at risk of poverty" (KBBE-2010 general call). It would certainly be interesting to see the directions that the people who get the project on that topic will follow!


Sunday, 21 February 2010

Food choice - a reading game

Fondant & Ice cream
Nutritious food; gourmet food; fast food; healthy food; baby food; convenience food; organic food... Food constitutes a human need tightly integrated into most aspects of our social existence. In several places around the globe (but not everywhere), people have access to a considerable variety of foodstuffs, while new products pop up on a daily basis, often dynamically co-existing with traditional ones on nearby supermarket shelves.

There, consumers have the chance to choose. A number of factors are known to come into play, including biological, economic and social ones. Understanding the process of making a food choice is certainly a hot topic for the corresponding sector these days. And it's not only the marketing pressure, as you may think. Surely, the food industry would love to make products that are (or can become) more appealing to consumers. But since food is closely associated with other things, like health, it would be really useful if the choices people go for were also "healthy" ones.

But there is a thin line somewhere there! Yes, food does affect the functions of the human body. Although research is still ongoing, there is clear evidence that food is related to the function of the nervous system, the immune system and the metabolism - to name a few of the systems/processes of the human body. But to what extent can food, on its own, prevent or cure diseases? If a food-health link is substantiated for a specific foodstuff, could food producers go ahead and inform the consumer about the health benefit of that food?

In Europe, nutrition and health claims are governed by Regulation (EC) 1924/2006. That Regulation places restrictions on what can be claimed on a food label and provides templates for a number of claims. Any health claim made on food labels must be true, not misleading and clearly understood by the average consumer; the claimed benefit should be achievable through reasonable consumption (specified by the producer); it must not imply that not consuming the food in question would negatively affect the consumer's health; and it should be accompanied by notes on the importance of a healthy, varied diet and a healthy lifestyle, and warn consumers of potential hazards associated with excessive consumption.

Regarding health claims, the Regulation distinguishes between several categories:
  • Health claims that have to do with the general function of the organism
  • Health claims that refer to psychological or behavioural function
  • Health claims regarding slimming, satiety control, etc.
  • Health claims on the reduction of a disease risk, or on the health and development of children
The authorisation of each new claim depends on the category it falls under. However, in any case, claims examined by the European Food Safety Authority (EFSA) need to be sufficiently substantiated by scientific evidence - strong enough to demonstrate a cause-and-effect relationship between the nutrient or food that carries the claim and the claimed benefit. Make no mistake: that is no trivial task (e.g., check out the EFSA panel's recent opinion on an application for a health claim for a product containing cranberry extract, or on the function of phospholipids).

Clearly, the law offers - in a controlled way - opportunities for food producers to advertise to consumers the health benefits that foodstuffs may contribute to. Critics do exist in both camps: pro-health claims and contra-health claims. However, few can ignore the fact that consumers today have access to ever more information on what they eat. All one needs to do is take the time to read a label. Although - as some fear - we may be facing increasingly longer food labels in the years to come!



Sunday, 31 January 2010

Lights, camera, gourmet shopping...

Sunset cow
Slashdot editors post on food research again! This time their post links to a story covering the work of some researchers at the Gifu Prefectural Livestock Research Institute, who managed to rate high-quality Japanese beef (Hida-gyu) using an infra-red camera. Well, to be fair, this is work in progress, since the success rate of this optical evaluation method is not very high yet (about 60%), but the aim is too ambitious to ignore: after refinement, it could allow consumers to use cheap cameras - perhaps the cameras that mobile phones are already equipped with - to "measure" the sensory profile of beef cuts at the supermarket before buying.

The article does not provide an extensive scientific background; it seems that the infra-red image can reveal information on the oleic acid content of the meat, which can be associated with desirable sensory parameters such as tenderness, flavour and overall taste.
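To illustrate the idea (a toy sketch only: real work would use spectra at several wavelengths and proper chemometrics, and every number below is fabricated), the core of such a method is a calibration curve mapping an IR reading to the oleic acid content:

```python
import numpy as np

# Fabricated calibration data: an IR reading from a beef cut vs. its
# lab-measured oleic acid content (% of fatty acids).
ir_reading = np.array([0.42, 0.47, 0.51, 0.55, 0.60, 0.66])
oleic_pct = np.array([44.0, 46.5, 48.0, 50.5, 52.0, 55.0])

# Ordinary least-squares line: oleic % as a linear function of the reading.
slope, intercept = np.polyfit(ir_reading, oleic_pct, 1)

def predict_oleic(reading: float) -> float:
    """Estimate oleic acid content (%) from a single IR reading."""
    return slope * reading + intercept

# A shopper's phone camera reports a (hypothetical) reading of 0.58:
print(f"Estimated oleic acid content: {predict_oleic(0.58):.1f}%")
```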

I admit that I'm totally fascinated by the idea. Smart tags have been a focus of food researchers for a long time. In the beginning, the objective was traceability; in that sense, RFID tags would be able to hold all the information necessary to identify the origin of a product. But as their data-holding capacity improved and their cost began to drop (although it is still prohibitive for most general uses), new applications came up. Time-temperature integrators, for instance, which can easily accompany RFID circuits, can provide data to already-developed mathematical models that estimate the microbiological or quality status of certain foodstuffs. Such systems allow the packaging to provide feedback to the consumer.
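As a minimal sketch of what such a model can look like - first-order quality loss with an Arrhenius temperature dependence; the rate constant and activation energy below are invented, not taken from any validated model:

```python
import math

R = 8.314             # J/(mol K), gas constant
EA = 80000.0          # J/mol, activation energy of quality loss (invented)
K_REF = 0.002         # 1/h, loss rate at the reference temperature (invented)
T_REF_K = 4 + 273.15  # reference storage temperature: 4 C

def loss_rate(temp_c: float) -> float:
    """Arrhenius rate of first-order quality loss at a given temperature."""
    temp_k = temp_c + 273.15
    return K_REF * math.exp(-EA / R * (1 / temp_k - 1 / T_REF_K))

def remaining_quality(history) -> float:
    """Integrate first-order loss over a list of (hours, temp_c) segments."""
    quality = 1.0
    for hours, temp_c in history:
        quality *= math.exp(-loss_rate(temp_c) * hours)
    return quality

# A cold chain with a 6-hour break at 20 C, as a TTI might record it:
history = [(24, 4), (6, 20), (24, 4)]
print(f"Remaining quality index: {remaining_quality(history):.2f}")
```

Feed the model the temperature history a TTI has recorded, and it returns an estimate of how much quality (and hence shelf-life) is left.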

So far, however, such systems have required investment on the part of the industry, the cost of which could have an impact on product prices and - at a second step - on consumer preferences. This time, the researchers are working on a scenario that uses (mostly) already-available technology. I assume that an additional light source and IR filters may need to be employed in order to get readings from a cellphone camera. On top of that, one would also need specific software and - possibly - some means of calibration. Still, though, none of those is unrealistic given the computing power of modern mobile phones.

I am really curious to see what will happen when that technology "hits the market". I would expect many more applications to follow that path of development. Also, I wouldn't be too surprised if meat producers started pre-marinating their cuts in olive oil solutions :-)

Interestingly, I have just realised that until now I considered infrared photography to be just another beautiful - yet geeky - kind of art. Well, that is about to change!


Sunday, 17 January 2010

The knowledge in the closet

What's in your closet - hanging by their hooks...
14/01 was a big day for the agro-, food- and bio- people around the world going after European research grants (it was the deadline for the 2010 KBBE call of FP7 - a public research funding instrument in Europe, providing about 53 billion euros to research over the period 2007 to 2013).

The persistently pending question, however, is: what happens to the results of all that research? Understandably, not all research efforts are successful; and even when they are, they don't necessarily lead to tangible results. There's really nothing wrong with that; research is a venture into the unknown (well, actually a venture into the not-fully-known, but let's not dwell on that for the time being), and thus has associated risks, mostly of a financial nature. It is also understandable that some of the research carried out will end up calling for further research in order to reach a ready-to-exploit stage.

But what is the amount of that ready-to-exploit scientific knowledge? In the last few years (or decades), the need for knowledge exploitation has become a policy priority. I can't judge whether that has led to substantial results (I have no means of measuring it objectively), but at least I feel that more people in universities and companies are now aware that there are ways to protect, trade and - in general - exploit new knowledge.

The current system of intellectual property protection has been widely promoted as a helpful tool in the quest for knowledge utilisation. While I can see the pluses, I can't help but wonder what other players, with limited access to the resources needed for such a game, could do. And what about knowledge that is already available in an "unprotected" form - that is, either published, or unpublished and kept in a closet full of paper, data CDs and other archival media?

Surely, even more of that knowledge could be exploited; if not on a big scale, at least on a micro-scale, through cooperation between scientists and small companies on short-term projects with low, affordable budgets; something like the sales that shops have, only for science :-)

As an example, think about the valorisation of the waste from fish processing factories. The large production plants often produce fishmeal or fish oil out of that waste, using available equipment suitable for their production volumes. At low production volumes, however, although the principles remain the same, it is likely that no optimised process is commercially available - even though designing one would be a relatively easy task for an engineer. Interestingly, the original research on the matter must have been carried out at laboratory-scale volumes and is thus likely to be closer to the desired application.

The same applies to most of the waste outputs of farming, where biotechnology could provide solutions, sometimes with no further research being necessary. One could argue that the driving force of additional income from such an effort is simply non-existent in those cases; the value of the products derived from those exploitation processes is only achievable if one has a distribution and sales network reaching the right market. And then, there is the risk of producing surpluses of secondary products, thus leading to a drop in their market value. True and true. The right answer depends on the actual case but, in general, it takes an "unbalanced" action to break a vicious circle.

Cooperation frameworks around that idea have been tried in a number of countries with encouraging results (e.g. the innovation voucher schemes run in many places, including Ireland, the UK, the Netherlands, Greece, etc.). However, with big grants around asking for ambitious research, the priorities of most researchers are not shaped towards "low-tech" cooperation with small enterprises. While I admit it would be stupid to suggest throttling the funding for innovative research, I believe that a stronger mandate for exploitation through small companies should begin to form.

Yes, there will be implementation problems (e.g., how many small food companies would make innovation a priority instead of growing production volumes or sales figures? how many of those companies would be willing to participate in such schemes?). And yes, the existing legal framework may not be very flexible around food innovation (e.g., putting a health-related claim on a foodstuff is not a trivial process - and there is a pretty good reason for that, I might add). But the potential benefits are manifold:
  • "Older" knowledge or published knowledge could find application in a way that could further benefit the original researcher or research group
  • "Older" knowledge or published knowledge could be transformed to practical innovation at a higher pace than entirely new, breakthrough knowledge
  • Small companies would get exposed to working together with scientists and vice-versa; possibly a beneficial exercise for both groups
  • The public profile of food research would improve
  • The mobilisation of private funds for research could be encouraged (many small sums of money instead of a few larger ones)
  • The competition between food producers would benefit - even at the regional level
It might be worth considering this in a more thorough way, especially now that the global financial crisis has reminded us that "big" doesn't necessarily mean "stable".


Sunday, 10 January 2010

The nut factor

roasted peanuts
I was idly going through Slashdot on Friday, where I came across a link to a story in the Globe and Mail. Apparently, the Canadian Transportation Agency told Air Canada to provide a nut-free buffer zone in their airplanes. Air Canada had already stopped serving peanuts on board, but they were still serving cashew nuts. The move was prompted by the complaint of a passenger who is severely allergic to nuts.

Interestingly, the Agency considered a severe food allergy to be equivalent to a disability that the air carrier should take into account in the service they offer.

I imagine the easiest way to comply with that decision is to stop serving nuts altogether. Isolating a part of the plane and making it allergen-free is technically possible, but it won't come cheap (think of air filters or at least air curtains, special floor mats, regular surface cleaning, control over who and what enters the zone, zone autonomy - e.g., restrooms - and a procedure to manage all those things).

Allergens in food have been receiving increasing attention lately. In the EU, there is specific legislation in place that requires foodstuffs to identify on the label any allergens contained as ingredients (either as main ingredients, or as carry-over substances from one of the main ingredients or from one of the treatments the foodstuff or one of its ingredients was subjected to), as well as any allergens likely to be present in traces (e.g., due to contamination during the production process). The list of allergens is regularly reviewed and revised. When needed, EFSA also looks into cases of allergens used as technological aids in food processing, trying to assess the levels that can survive into the final product and to estimate the corresponding risk.

However, to my understanding, food allergens have so far not been treated as a hazard with an environmental dimension. Personally, I am unaware of the potential effect of food allergens on people with severe allergies when the contact is made through the skin (or the lungs, in case a food allergen somehow ends up in fine dust). I would imagine that brief skin contact could not easily lead to adverse reactions; but, then again, I am no expert on allergies and, thus, I might be very wrong in my hypothesis.

While I acknowledge the risks, the inconvenience and all the negative effects that allergies have on the quality of life of the people affected, I tend to believe that in severe cases, the person affected should also take his/ her condition into consideration. What exactly that may mean, really depends on the case and their doctor's advice. It could mean avoiding certain places or wearing long sleeves and gloves or a mask (i.e., means to avoid coming in contact with the allergen), carrying cortisone or antihistamine medication or even shots of epinephrine (i.e., means to fight the allergic reaction), etc.

After all, it makes sense to me that one should take reasonable care of oneself!


Thursday, 7 January 2010

Superbugs

big bug
A few days ago, just before the start of 2010, I came across an article on the BBC News website on the undesirable effect that disinfectants may have on bacteria. The article argued that the incorrect use of disinfectants (e.g., incorrect dilution) could allow bacteria to develop resistance to antibiotics.

I am by no means an expert in microbiology, or even plain-vanilla biology; however, I was aware that overuse or misuse of antibiotics can lead to increased resistance to those antibiotics - a trait which, once acquired by a group of bacteria, can be passed on to others under certain conditions. In the hospital world, where people (patients) often have a weakened immune system, MRSA is a considerable threat, while an increasing number of other pathogens (or potential pathogens) are beginning to exhibit threatening tolerance to the available antibiotics, turning from simple "bugs" into "superbugs".

To my understanding, antibiotic-induced antibiotic resistance can be mitigated by a tight antibiotic-use regime. Sweden has had considerable success in tackling the MRSA problem by forcing the health care system to resort to antibiotics only when absolutely necessary. The transition might have taken considerable time, but the gain sounds substantial: they can still make good use of antibiotics that in many places of this world are now considered ineffective.

The BBC article I spoke of earlier, however, is alarming in the sense that not only antibiotics but also disinfectants (and possibly other means of bacterial control?) can somehow lead to antibiotic resistance. Clearly, improper use of disinfectants, which allows a select portion of the microbial population to survive, favours that surviving population in the sense that it eliminates the competition. I would assume that this process effectively ensures that the descendant bacteria will carry the gene combinations that allowed their ancestors to survive the disinfectant. It seems, if my understanding of the article is correct, that those "disinfectant-survival" gene combinations can also be effective against antibiotics.
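To make the selection effect concrete, here is a toy sketch in Python (log-linear, Chick-Watson-style inactivation; the population sizes and kill rates are invented) of how under-dosing a disinfectant leaves behind a population enriched in the more tolerant cells:

```python
import math

def survivors(n0: float, kill_rate: float, dose: float) -> float:
    """Log-linear (Chick-Watson-style) inactivation: N = N0 * exp(-k * dose)."""
    return n0 * math.exp(-kill_rate * dose)

# Invented example: 1e6 cells, 1% of which carry a trait that makes the
# disinfectant five times less effective against them.
N_SENSITIVE, K_SENSITIVE = 9.9e5, 1.0
N_TOLERANT, K_TOLERANT = 1.0e4, 0.2

for dose in (2, 5, 15):  # arbitrary dose units (concentration x contact time)
    s = survivors(N_SENSITIVE, K_SENSITIVE, dose)
    t = survivors(N_TOLERANT, K_TOLERANT, dose)
    print(f"dose {dose:2d}: {s + t:9.0f} cells survive, "
          f"{100 * t / (s + t):5.1f}% of them tolerant")
```

At low doses plenty of cells survive and the tolerant fraction climbs quickly; the survivors' descendants inherit whatever made them hard to kill.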

The alarming bit is that the use of disinfectants is much, much wider than the use of antibiotics. They are used not only by hospitals but by a very high number of businesses, including the food industry, and they are also at hand in the typical household. I admit it would be inconsiderate to extrapolate that all disinfectants, if misused, could lead to superbugs. The fact that the number of known superbugs is still rather small, while the use of disinfectants has been more-or-less systematic over the last decades, would rather suggest that the risk is minor.

I mentioned the food industry before. Interestingly, the agro-food industry was alarmed, in the past, by the antibiotic resistance problem, but managed to sort it out by adopting good livestock practices and by considerably limiting the use of antibiotics. In Europe, there is legislation in place to ensure that things stay that way. But what about the use of disinfectants? The manufacturers of such products do include instructions for use, which are normally followed. I wonder, though: with modern foodstuffs enjoying increasingly longer shelf-lives, is there any significant chance that microorganisms which find their way into foods can turn into threatening superbugs?

In "live" foodstuffs (i.e., foods that contain a flora consisting of living microorganisms, such as yogurt, fermented sausages, various cheeses, tea, etc.), which contain a small eco-system, it is likely to be much easier to keep things under control. An undesirable contamination would be worse in the case of previously sterilised (or poorly sterilised) products under packaging conditions that lack bacteria growth barriers.

In any case, and for any of the existing reasons (ranging from health and safety concerns to competitiveness and sustainability issues), it may be worth revisiting some of the practices in our everyday "war" with bacteria.

The use of good practices when it comes to cleaning surfaces, or when actually using antibiotics, has proven effective. The use of phages to fight off antibiotic-resistant bacteria has also been tested - successfully, I believe - although I'm not sure whether it can find wide-scale application beyond the health sector. The manipulation of microbial ecology could be another promising area, one which has recently re-attracted research interest; after all, it may be time to remember that microorganisms are our valuable friends far more often than not (and not only when it comes to the function of the human body).

(Photo "big bug", CC by G J Hutton)

Monday, 28 December 2009

Crave-Buy-Use-Discard: The life-cycle of consumer-grade technology

sunflower life-cycle
We must be living in an era of wonders. Product-wise, at least. In many parts of the world, technology has found its way into our everyday lives. From smart textiles to intelligent food packaging materials, to fast, affordable multi-core processors, to the highly advanced army of modern mobile devices that increasingly fill up our pockets, bags and carrying cases, there is no doubt that technology is having an impact on the shape and capabilities of the products around us.

But does this also mean that people have a better grasp of the underlying technology behind such advancements? Are they in a position to get the most out of the modern technological wonders?

Don't get me wrong, I do not mean to criticise in a dismissive way. In fact, that is a sort of rhetorical question, since the answer - I believe - is simply "no, not necessarily". Of course, education has improved. The amount of knowledge pumped into young individuals through the educational system has also increased. Plus, young people are incredibly good at familiarising themselves with gadgets of all sorts. But at the same time, the technological level required to produce much of the modern stuff around us has been rising at a higher pace, thus making catching up a challenging task.

It is no surprise that when things go wrong with modern devices, "fixing them" usually means either "open the box, replace a part, close the box" or - more often - "discard and replace". I admit that, given the level of integration of the various components, doing otherwise might be highly impractical.

What bugs me, however, is that we don't get the time to make the most out of the gadgets we buy. The actual depreciation rate is a tad too quick. Is it because of the poor usability of the devices we buy? Is it because of the market-disruptive advantages the newer devices bring? Is it because of consumers' poor understanding of the true capabilities of the things they already own? Are the price tags low enough to justify a non-conservative approach in our acquisition of technology-oriented goods? Is it simply a question of fashion?

Possibly the reason is a combination of things. If one looks at computer software, say at the office, then it is clear that the features/capabilities offered today are far more than the average user will ever use. Interestingly, for word processing software in particular, not that many features have been added since the old WordPerfect 5.0(TM). If one focuses on hardware, things are even worse: CPU cores, in general, spend the majority of their time idle, waiting for user input; GPUs, too, have achieved insane processing power which, unless one immerses oneself in the realms of gaming, goes unused. Games aside, all that power mostly stays clear of the average consumer.


What needs to be done, IMHO, is to give technology more time to mature. I certainly don't expect manufacturers to go down that route any time soon. I do, however, believe that consumers would benefit a lot from adopting technology with a proven relevance to their needs and a reasonable future ahead.

Would that kill innovation? Hardly. It would, however, affect the way innovative technology looks.

Take the small, pocket-sized, mobile 10-lumen projectors (the idea in 2008 and the user experience in 2009): the idea is cool; the niche markets are there; but - at least in my region - those products came to the market, stayed for a few weeks and then practically disappeared. Why? Was the market saturated? Were people indifferent to the new product class? I think it was because that new class appeared too early. People are still buying bulky, bright projectors for offices, and the average living room still lacks a projector; the market is clearly not mature.

With consumers demanding a longer cycle for the technology they use, new products would have to focus on relevance, quality (including adherence to standards) and usability, and feature a serious product support scheme. Software developers (including firmware developers) would have more time to optimise code and performance and thus allow for much better utilisation of present technology.

(Photo: "Sunflower life-cycle", by me)