Sunday, 25 December 2011
Sunday, 11 December 2011
|'Slippery when icy' by ksr8s |
under a CC license
'Omniphobicity' is the property of some surfaces to repel both polar and non-polar liquids. The former (polar liquids) typically refers to aqueous solutions, and surfaces that repel them are called 'hydrophobic'. The latter (non-polar liquids) typically refers to oils, and surfaces that repel them are called 'lipophobic'.
One would think that hydrophobic surfaces would tend to be lipophilic while hydrophilic surfaces would tend to be lipophobic. Well, often it is that way. But not always. The degree of hydrophobicity of a surface varies, depending on a number of factors, including composition and structure of the interfacial layers.
Interestingly, the behaviour of surfaces (or 'interfaces', to be more precise) regulates many processes in nature, including some we don't normally think about. For instance, the feathers of aquatic birds tend to be covered by a hydrophobic powder that waterproofs them, while in some cases the hydrophilicity of the feathers is controlled so as to allow some species to submerge in water.
Plant leaves tend to be hydrophobic, as they are covered by waxes. A classical example is the lotus leaf, seen in the video below:
Exploiting the properties of different surfaces has found plenty of technological applications. Teflon-coated pans are a humble but extremely common one. Applying coatings to surfaces can modify their behaviour, mostly regardless of the composition of the underlying material. For instance, the following video shows the behaviour of two types of wool cloth when wetted:
Hydrophobic behaviour feels a bit weird, doesn't it?
But, really, the options and possibilities are many. For instance, rendering surfaces hydrophobic helps with waterproofing and grants surfaces 'self-cleaning' abilities. Commercial products for that have been on the market for several years and are extremely easy to apply (e.g., nanophos products - btw, I'm not affiliated with them).
Making surfaces repel both polar and non-polar liquids, however, has proven a bit challenging. You see, apart from achieving the desired surface behaviour, the resulting layer normally has to have acceptable mechanical properties and reasonable endurance. Recently, a Harvard-submitted paper appeared in Nature describing the creation of 'Bioinspired self-repairing slippery surfaces with pressure-stable omniphobicity' [a more layman-friendly description can be found in the Discover magazine blog entry]. As the title suggests, the surface modification was inspired by the pitcher plant. The authors suggest that such a surface could be advantageous in liquid handling, in medical applications and in applications where anti-fouling and self-cleaning properties are needed (e.g., in the food industry).
That's certainly an interesting technology with good potential to prove 'disruptive'. Of course, for food (and medical) applications, there are still a number of rigorous checks to make in order to ensure that the particular surface coating is toxicologically acceptable, remains stable across the ranges of temperature, pH, solvents and mechanical stresses used in food processing, cannot be attacked by bacteria (unlikely, since it is omniphobic) and doesn't break down or leach in any way into the food it is in contact with. And all that at a reasonable cost... If it goes through, I bet we'll soon see some interesting product concepts on the market!
Sunday, 4 December 2011
|'Crystal ball take #1' |
by Isobel T under
a CC license
The more ingredients are employed in food production, the longer the route they follow before ending up in a product, the more complex the food supply chain becomes and the higher the number of potential hazards. But, relax: that doesn't mean that the end-product will be unsafe. Practices are in place (and in the EU those practices are dictated by law and enforced) to ensure that hazards are identified and controlled at every stage, so that the final product is safe. Further to that, the Food Authorities operate sampling programmes and perform inspections, so the overall risk is kept at very, very low levels. In fact, I suspect that the risk of consuming spoiled food due to improper storage or handling at home (or in mass catering establishments) may be an equally or even more important factor to keep in mind when discussing food safety!
Regardless of how things are at present, it is valid to say that a chain is as strong as its weakest link. With new players joining the food production and distribution 'arena', with an increasing globalisation that does have an impact on food production and on consumer trends, with climate change very much in progress, with incidents of contamination of sufficient scale to affect the food chain, with the evolution of bacteria and other life forms that can affect food safety and quality and - altogether - with the emergence of new hazards, the quest for food safety is really still ongoing.
Still though, as every new food crisis proves, the system is not perfect. Could it ever be? I would say, plainly, no. Pardon my cynicism, but I feel it is impossible to check all food reaching the consumer for all potential hazards. And I mean really all potential hazards, regardless of whether they are considered to be reasonably expected, rather-not-expected or completely unpredictable. Oh yes, the last two are also seen from time to time. The sunflower oil crisis was something like that.
Performing an analysis on a foodstuff is normally restricted to a single hazard factor or - at best - to a limited group of hazards. An analysis for salmonella, for instance, will not highlight the presence of other pathogens, unless a specific analysis for them is also carried out, nor will it reveal a high heavy metal concentration, even if that were the case. Currently, a long list of analytical methods is at hand to reliably identify the presence (and quantity) of numerous contaminants in food. But while care has been taken to ensure the specificity, accuracy, reproducibility, etc. of each method used, there is no easy way of testing food samples for their safety altogether.
Is that critical? Well, maybe not yet. But it may well make sense in times of tight budgets. It may also prove useful as a line of defense against emerging hazards, including bio-terrorism. The latter sounds a bit exotic, true (although it's not an entirely new story). But it remains a concern and it's always better to be ready and alert! Going slightly off-topic, it seems that DARPA has been identifying a number of dark possibilities (some of which could also be deployed via the food supply channels) and is trying to identify viable counter-measures...
Today, to my knowledge, there are no methodologies in practical use that can indicate food safety as a whole.
Sensory analysis can give hints on the state of foodstuffs but it would only detect hazards that would affect the sensory profile of the tested sample; the simple presence of many of the common food pathogens, the existence of chemical contaminants, etc., would normally go unnoticed.
Intelligent packaging advances have taken steps towards that direction (e.g., microbial growth indicators, time-temperature indicators, shock indicators, etc.) but their deployment has been limited, mostly because of cost issues.
A number of spectroscopy and imaging (e.g., hyperspectral imaging) techniques have been developed, but they are intended for measuring quality parameters of specific foodstuffs, mostly where the necessary monetary driving force exists (e.g., meat quality estimation).
New-generation, biotechnology-produced sensors, employing cell membranes, living cells or enzyme-containing membranes, also seem promising as analytical tools. They tend to exhibit specificity but, given their low cost, they could be assembled into kits suitable for a pool of tests on a single sample.
I believe that we should go much beyond that. We should keep existing analytical methods and work on improving specificity, sensitivity, etc. but we should also develop tools that would allow a first yes/no assessment of the safety of foodstuffs. I wouldn't be surprised if that would come together with a generous error margin, initially (false positive and false negative results).
Maybe we could get some inspiration from the IT world. Take spam filters applied to e-mails, for instance. They have been employing a number of technologies. Initially, they looked for specific words in the subject line or in the e-mail text. Others tried to evaluate the 'reputation' of the originating e-mail server, checking against public blacklisting databases. Nowadays, however, spam filters are much cleverer and more adaptable: they evaluate e-mails against a set of rules and they have the capacity to 'learn' as you (the user) re-classify false positives (legitimate messages incorrectly flagged as spam) and false negatives (spam that ended up together with the normal e-mail).
Would it be possible to define a set of criteria that would allow a machine-learning system to be used for evaluating the overall safety of a food sample? For sure, deploying such a system at a world-wide level would give it a lot of data to use for refining its model (the 'learning' process). Would that make a good first line of defense? Could we devise cheap and effective methods to fill in any gaps that such a system would have?
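To make the spam-filter analogy a bit more concrete, here is a minimal sketch of the kind of learning classifier I have in mind: a naive Bayes model over binary test indicators, the same basic technique early spam filters used for words. Every feature name, label and training sample below is invented purely for illustration; a real system would need far more (and far better) data.

```python
import math
from collections import defaultdict

# Toy naive Bayes 'spam filter' for food samples.
# All feature names and training data are hypothetical.
training = [
    ({"off_odour", "high_ph"}, "unsafe"),
    ({"off_odour", "gas_in_pack"}, "unsafe"),
    ({"normal_ph"}, "safe"),
    ({"normal_ph", "intact_pack"}, "safe"),
]

def train(samples):
    counts = {"safe": defaultdict(int), "unsafe": defaultdict(int)}
    totals = {"safe": 0, "unsafe": 0}
    for features, label in samples:
        totals[label] += 1
        for f in features:
            counts[label][f] += 1
    return counts, totals

def classify(features, counts, totals):
    """Pick the label with the highest (log) posterior score."""
    best_label, best_score = None, float("-inf")
    for label in totals:
        score = math.log(totals[label] / sum(totals.values()))  # class prior
        for f in features:
            # Laplace smoothing so unseen features don't zero the score.
            p = (counts[label][f] + 1) / (totals[label] + 2)
            score += math.log(p)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

counts, totals = train(training)
print(classify({"off_odour"}, counts, totals))  # prints "unsafe"
```

The 'learning' loop of the analogy would then simply be: when an inspector re-classifies a mis-flagged sample, append the corrected example to the training data and re-train.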
The IT world has also been using other methods that may be of interest here. Computer viruses, for instance, were initially sought using a signature-search approach. That has evolved considerably, as computer viruses became more complicated and the systems they targeted also changed. Process emulation is one of the alternatives employed.
For the record, the spaceships of the Star Trek series were lucky enough to have 'biofilters' that could identify and remove 'anomalies' in the matter they were processing, including pathogenic agents.
Thinking aloud, I assume that the ultimate test would be systematic consumption and observation but - obviously - that is not an acceptable approach! Could we, perhaps, have an ecosystem of bacteria do that for us? Their survival, flourishing or decline - in other words, the change in the balance of the ecosystem - could give clues on the hazard factors present in the sample... Just a thought, but I feel it might be worth looking into...
Sunday, 20 November 2011
|'Water walker' by Navdeep Raj|
under a CC license
It doesn't take much thought to reply to that, does it?
A couple of days ago (on 18/11, to be precise), The Telegraph featured an article titled "EU bans claim that water can prevent dehydration". The article comments negatively on legislation that follows an EFSA opinion, which rejects a health claim on the potential of water consumption against dehydration. The EFSA opinion is not a very new story, but it seems to have resurfaced. The said article was also on Slashdot yesterday, so I assume it has received plenty of attention world-wide by now.
Interesting article, with a negative bias, regardless of the fact that both quotes and facts are provided. The article suggests that EFSA's opinion and the subsequent legislative act really go against common knowledge and are, thus, wrong. Apart from that, according to the article, the whole process has been rather expensive (for the taxpayer). Only at the very end of the text does a supportive (of EFSA) opinion appear, with no further comment given.
Well, let's see where this case stands. The claim that was submitted to EFSA for their opinion was "regular consumption of significant amounts of water can reduce the risk of development of dehydration and of concomitant decrease of performance". It was submitted by two German professors (some internet sources say they are consultants for the bottled water industry) under Art. 14 of Regulation EC/1924/2006, which covers claims for the reduction of disease risk.
EFSA said (and repeated) that the submitted claim did not meet the requirements of Art. 14 for the reduction of disease risk. The European Federation of Bottled Water seems to agree. Dehydration is a state of the body and - in itself - is not a disease, although it can be a side-effect or symptom of various diseases. I admit, however, that EFSA's opinion is written in a rather complicated way, where they seem to somehow accept dehydration as a disease before concluding that the requirements of the Regulation are not met! Strange...
To make things interesting, the responsible EFSA panel had given favourable opinions on the role of water for "maintenance of normal thermoregulation" and for it being a "basic requirement of all living things" - both claims falling under Art. 13 of Regulation EC/1924/2006, which includes claims on "the role of a nutrient or other substance in growth, development and the functions of the body". In other words, it seems plausible that the claim was filed under the wrong classification. If that was really the case, EFSA should not be the one to blame.
It is clear that all nutrition and health claims submitted for consideration should be rigorously processed. That's what the law foresees and that's what is needed in order to protect the consumer and maintain a competitive - but fair - market. Submitting obvious (or stupid) claims doesn't mean that they won't go down the processing pipeline. And although this comes with a price tag, there's no safe way around it; there is no "obviousness" clause that would allow the EC (or EFSA) to summarily accept or reject a proposed claim.
Going a bit beyond, I really wonder: what is the point of having a health claim stating that water can help against dehydration? If it is common knowledge (which it is), why apply for it? In any case, EU law would prevent such a claim from being phrased in a way that benefits a particular product, since the beneficial function is performed by any drinkable water (yes, including tap water :-)
Was it an effort to prove that the system is broken? If that was it, then point taken. And then immediately, point put aside. Every system that is open to all and is committed to dealing with all has similar weaknesses. I've got nothing against improving a system, if that is needed, but passing the obvious through formal channels so as to see what happens is a questionable practice...
Wednesday, 2 November 2011
|'Sunset writ small' by |
bgblogging under a CC license.
Photography is one of those things that have many different functions in our lives. For some, it is art. For others, science. It is also a form of expression, in a way an equivalent to speech, in the sense that it can convey messages to specific (or not-so-specific) audiences. Some consider it merely a visual tool accompanying written or oral speech. At the same time, photography is a means for art, science and communication. And, on top of all that, there are those who embrace photography as a passion.
A few days ago, I made a reference to light field photography, which seems to be slowly emerging as a niche in consumer-oriented photographic products. I described it as exciting and challenging but also divergent from the traditional spirit of photography that most hobbyists and professionals carry. I now think I may have been a bit too harsh.
It's no secret that the photography world features considerable diversity: a variety of technologies are being used for a variety of applications by a variety of people. Photography seems to me to be a mainstream skill/hobby that hosts an overwhelming number of hard-to-ignore niches. Just a couple of examples I recently came across:
a. Revisiting the old times of photography, one case of which is the resurrection of instant film cameras (Polaroids). With Polaroid (the corporation) itself having shifted towards the modern era, and with the entire film-based world slowly making the leap to digital media, there were voices that asked otherwise. The SavePolaroid movement (archived site: here) lobbied for preserving the option to use instant film. "It grows up with you and becomes a part of you", as a visitor of SavePolaroid.com put it. I can see what she meant, although I myself was never an instant film user. That is passion! Now, the Impossible project offers people the chance to discover or continue using instant film Polaroids.
|'Lomo' by pixelfreund.ch |
under a CC license
b. "Small world" photography. To be fair, that's by no means mainstream. Capturing images from the "small world" often requires specialised equipment and some skill in sample preparation. Especially when it comes to techniques like TEM (transmission electron microscopy), AFM (atomic force microscopy), BAM (Brewster angle microscopy) or even confocal microscopy, one needs specialised equipment that is (very) unlikely to be found outside the lab walls, in the hands of hobbyists. An encouraging exception to the rule has been a recent boom in the marketing of USB microscopes (such as VEHO or Reflecta), although I tend to believe the trend hasn't really persisted.
Photos from the small-scale world, however, always attract attention. Be it insects, snowflakes, bacteria, crystals, phases of matter or molecules, images of the world at such a size-scale have always been associated with a certain kind of "cool factor". There are several interesting sources out there. Apart from what one can find on Flickr or Picasa, Nikon "Small World" is certainly worth a visit. It is a corporate-supported website (Nikon Instruments) hosting several galleries of photos from the "small world", selected through open competitions. In most cases, the photos there are accompanied by (brief) information on the sample and the technique used to get the picture. As an example, a favourite of mine:
Wing scales of Urania riphaeus (Sunset moth) (100X), |
available in the "2008 Winners" gallery of Nikon "Small World"
I guess the bottom line is that the photography scenery is - fortunately - beautifully complex. You're certainly unlikely to feel bored there!
Tuesday, 25 October 2011
|'1934 Kodak Brownie|
Hawkeye 2A vintage
camera' by Kevin Dooley
under a CC license
Manipulating light through pinholes or lenses has been known since the BC era. Finding a way to 'freeze' light on a piece of film proved a bit more challenging. The first usable form of photography - as an innovative technology - appeared around the 1820s. The next few decades were certainly exciting, with huge steps towards better equipment and superior consumables. Technological progress and consumer demand went hand-in-hand for several decades. Even a few years ago, just before the dawn of the digital era of photography, cameras and films were available to practically everyone, in all sorts of flavours and at all sorts of costs.
Regardless of the technological advances, the main idea has remained mostly the same since the early photography days: collect light from an object/person/scene, drive it onto a photo-sensitive surface and capture the moment! Even with the coming of CCDs, which eventually made digital cameras possible, the idea has remained unchanged; it is just the film that has been put aside. (Edit: when it comes to photography and the corresponding equipment, people often like retro-looking technology.)
Around that main theme, a number of variations have developed: different kinds of lenses, numerous filters allowing for all sorts of visual effects, software that enables post-processing with - practically - no limits, etc. People have even looked at how things appear outside the narrow limits of (our) visible light spectrum; infrared and ultraviolet photography are niches that still maintain their audience and are always associated with a certain 'cool factor' (e.g., common things in the IR and a more structured approach in UV/IR photography).
A much less known area of photography is 'light field photography'. Putting the science aside for the time being, the idea is somewhat different from classic photography: instead of getting a single projection of light rays on a plane (be it film or a sensor), let's capture more information about the light received by the camera, i.e., not only intensity and frequency (colour) but also direction. Having captured an instance where the received light rays are 'better documented' makes it easier to manipulate that instance after its capture, changing, for instance, the focus point or altering (slightly) the viewpoint.
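To give a rough feel for why the extra directional information matters, here is a toy sketch of 'synthetic refocusing': treat the light field as a grid of sub-aperture views, shift each view in proportion to its position within the aperture, and average. The array shapes and the `alpha` (refocus depth) parameter are my own illustrative assumptions, not how any commercial camera actually implements it.

```python
import numpy as np

# Minimal shift-and-sum refocusing sketch for a 4D light field L[u, v, y, x],
# where each (u, v) indexes a sub-aperture view of the scene.
def refocus(lightfield, alpha):
    n_u, n_v, h, w = lightfield.shape
    cu, cv = (n_u - 1) / 2, (n_v - 1) / 2
    out = np.zeros((h, w))
    for u in range(n_u):
        for v in range(n_v):
            # Shift each view in proportion to its offset from the aperture
            # centre; 'alpha' selects the synthetic focal plane.
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (n_u * n_v)

# A flat (constant) light field refocuses to the same constant at any alpha.
lf = np.ones((3, 3, 8, 8))
img = refocus(lf, alpha=1.0)
print(img.shape)  # prints (8, 8)
```

Varying `alpha` moves the synthetic focal plane, which is essentially what 'focusing after the fact' means.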
Stanford University has hosted quite a lot of work on light field photography. It's worth visiting their webpages, e.g., http://graphics.stanford.edu/papers/lfcamera/ (there is a nice, illustrative video at the bottom). Ren Ng, one of Stanford's researchers, has started his own company based on that technology, Lytro. Lytro has made quite an impact on the photography press lately by launching a camera with the capability to focus after the fact.
Promotional video of the Lytro camera
Sample photo from the Lytro website. Click on an area of the photo to refocus.
Now, personally, I find both the science behind it and the application quite exciting! This despite the fact that some experts have been rather critical of the particular implementation (e.g., Thom Hogan's blog). And, no, I believe Lytro is not the first plenoptic camera to reach the market (e.g., Raytrix GmbH), although it does come in a very consumer-oriented form.
Myself, I find whatever technological or practical constraints exist bearable. For instance, the resolution offered is likely to be quite far from what current dSLR or prosumer options deliver. Also, merely viewing light field photos requires proprietary software, and so does sharing them. But still, it's the new thing around. It may feel clumsy and strange but, if it stays around long enough, it is bound to improve!
However, I admit, it sort of defeats the purpose of taking photos in the first place. Yes, it still allows you to 'capture the moment'. But it takes away the magic of finding the right angle, of focusing on the spot that highlights your point of view behind the photo. It is basically the same debate as video vs. photography. (Edit: for those of you who wonder, light field video does exist - e.g., http://pages.cs.wisc.edu/~lizhang/projects/lfstable/ - yet not in a commercial product, AFAIK; having a light field video camera allowing for ex-post manipulation of the output with respect to POV or focus would be cooooool, too.)
I think that we are about to see plenty more developments in the world of image capturing and processing.
BTW, just before closing this post, I can't resist mentioning that, yesterday, I saw on Slashdot a link to Kevin Karsch's site on 'Rendering synthetic objects into legacy photographs'. I find what they have managed to accomplish pretty amazing. Also a bit scary. Here is a video they have made available:
Rendering Synthetic Objects into Legacy Photographs from Kevin Karsch on Vimeo.
If we keep up this pace of development, I sometimes wonder how much more innovation we can possibly accommodate :-)
(Note: I'm not affiliated with any of the companies mentioned above. This is not a product review - I neither own nor have access to any of the light field cameras mentioned.)
Sunday, 11 September 2011
|'Zombie walk 2010' |
under a CC license
The story was based on a paper by Cryan et al. in PNAS, which describes the effect on the behaviour of mice fed feed supplemented with Lactobacillus rhamnosus; altogether, the mice became more relaxed. Although this is not the first time gut bacteria have been shown to have an effect on the mood of mice in vivo, this time the impact on the public seems to be higher.
Symbiotic relationships between living organisms are nothing too uncommon. That applies to both mutualistic and parasitic symbiosis. Although 'mind control' cases have been known, especially in parasitic symbiosis, it wouldn't have been easy for me to imagine that the same would apply to a mutualistic symbiosis, especially one taking place in the gut.
One would imagine that 'mind control' cases should involve an organism with direct access to the brain or, at least, to the bloodstream. The infection of bullet ants by Cordyceps is an example. The fungus forces the infected ant to climb upwards and clamp itself firmly in place. There, the ant will eventually die and the ascocarp (the fruiting body of the fungus) will emerge from the ant's head.
'Mind control' can also be used by insects that want to lay their eggs on their ideal host. For the orb spider, for instance, the nemesis is the pompilid wasp, which temporarily immobilises the spider, lays an egg on it and lets the larva do the rest. The larva sucks nutrients from the spider and, when the time comes, chemically instructs the spider to alter its web in such a way that it can support the cocoon the larva will later make for itself. Needless to say, the orb spider doesn't survive the process and becomes dinner, after all. (The video below shows the process - the action starts at about 03:00.)
An even spookier approach is practised by the wasp Ampulex compressa, which uses cockroaches to lay its eggs on. To manipulate the cockroach, the wasp injects, in sequence, first a temporary numbing agent into the cockroach's brain and then a chemical that blocks its escape reflex. After the process, the cockroach is alive and well (though not for long) and follows the wasp's will. The end is bitter in this case, too, as the larva will consume the cockroach, starting from its non-vital organs.
Snails, too, can host parasites. (I found a link to the video below at http://primesurrealestate.com/2010/04/mind-altering-parasites/).
For humans, the list of parasites is not too short, either. But I am not aware of any zombie-like mind-control bugs. Yes, Toxoplasma gondii can alter people's behaviour and behavioural characteristics, affecting males and females in different ways, but not in the grotesque way that bullet ants are controlled by the fungus. Still, the effect of toxoplasma might be partly responsible for macroscopic properties of societies around the world, considering how widespread toxoplasmosis is, although other factors are likely to exercise far greater influence (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1635495/).
A number of diseases are also known to affect mental processes, usually by damaging brain tissue (e.g., Alzheimer's, syphilis, etc.), but such changes - I suppose - are irreversible.
Going back to the story with the mice and Lactobacillus rhamnosus, the beauty of it is that the effect is 'subtle': measurable and real but mild and reversible. And that shows a lot of potential to be explored in the use of probiotics, not only for the protection of the gut's normal function but also for the delivery of 'brain-related' interventions. What makes things even more interesting is that living bacteria are adaptable and 'intelligent', in the sense that they may be able to perform their 'mind-controlling' action (e.g., excreting a cocktail of chemicals) only under the right conditions. Just imagine mitigating stress by adopting a diet rich in probiotics - e.g., within fermented food - with the ability to respond to in-gut stress markers. It is so much easier than having to take pills, and the fact that the action is subtle may allow individuals to also train themselves to feel less stressed.
It seems, after all, that diet does have the potential for an even greater impact on our lives...
Sunday, 26 June 2011
|'365.29' by jessyroos |
under a CC license
It is not rocket science that a reduced income leads to a tighter budget. So, will that affect the way we eat? Simple answer: yes!
For most people, the decrease in income is larger than the expenses one could call "luxury". And if one puts aside hard costs, such as house mortgages, car insurance, etc., then whatever is left is shared among the expenses for food, health, clothes, transport, utility bills, education, etc. Taking into account that food ranks high on the household budget, it is easy to see why the way we eat is likely to be affected. This is not a food crisis of the kind seen in other parts of the world, where the nutrition challenge has long been identified, but it still calls for careful thinking and concern.
Private label products sell increasingly well in supermarkets (the link is in Greek - a Google translation is available here), restaurants see a decline in the number of patrons, fast food chains and coffee shops introduce "deals" in their menus, and so on. Unless the cost of all food ingredients drops accordingly, there is a risk that an increasing number of people will be tempted to choose food based solely on price.
Food safety is a legal requirement, so I wouldn't worry too much about that. But what about food quality? What about nutritional content?
The risk is known. As food is associated with health, eating badly will - at some point - lead to health problems. I'll skip the part where I say that addressing health problems costs money - to the individuals affected, as well as to the healthcare system. I'll just state that eating well (i.e., healthily) is a need - not a luxury.
So, the challenge here is to ensure that affordable food is - nutritionally - good food. That isn't necessarily too hard to do. And there are ways to make people more aware of that. For instance:
- We should be encouraged to cook more, using good ingredients and following a balanced diet. Inviting friends home and cooking, instead of ordering pizza, could be an idea. Apart from eating better, it could also improve our quality of life in other ways.
- We should take the time to have a look at the nutrition labels of the foodstuffs we use. And, yes, we should try to understand what they tell us. Have a look here (if you live in the EU) or here (if you live in the US).
- We should encourage competition amongst economic operators of the food market, making sure that the bad ones get the message and rewarding the good ones with our trust.
Saturday, 18 June 2011
|'Avocado snack' by Voxphoto |
under a CC license
The more consumers become aware of the links between food and health, the more active the triangle of the food market, scientists and policy makers becomes.
In Europe, stakeholders of the food world are already engaged in the discussion on the nutrition and health claims that may appear on foodstuffs. An EC Regulation is in place (EC/1924/2006) but essential elements of it are still in the making. Such elements include the lists of health claims, which will define the claims and the exact conditions under which they may be displayed on a foodstuff.
Another, quite important, element is the definition of the nutrient profiles that will make a food eligible to bear claims. Nutrient profiles are being worked on by EFSA experts; what makes them interesting is that those profiles are, in very simple terms, an effort to determine whether a food is 'good' or 'not so good' and to allow claims to appear only on the 'good' ones. Doing that, of course, is not an exact science, but it does rely on effectively summing up whatever established scientific facts on food and nutrition exist. In some places, nutrient profiles are already in use and are taken into consideration in the advertising of foodstuffs - though mostly on a voluntary basis.
Food labels are likely to change yet again in the future. In Europe, GDA labelling (an industry-supported voluntary nutrition labelling scheme) has gained plenty of momentum. In the US there is the 'Rethink the Food Label' effort, which leans on the public to put forward proposals for a better label. I can't predict what the outcome will be. Personally, I would prefer scientists to feed strongly into the process. But I do see that food labels should make the most of what consumer perception allows for.
|USDA - ChooseMyPlate.gov|
Sunday, 29 May 2011
|'October 2010 Alaskan|
Viaduct Closure' by
WSDOT under a CC license.
...is plenty of things. Clarity and simplicity? Peace on earth? Food for all? Money? Ideas? All of those things? Which ones exactly depends on your point of view, but, really, I don't think there is a right or wrong answer here. The world is a game of many players, where all can do their bit to influence the result.
A friend sent me a link to Architectes de l'urgence the other day. I felt surprised. Even though I am an engineer myself. Even though I am aware of the contribution of engineers in many places around the globe where the need for (re)construction exists.
I felt surprised possibly because in our everyday life, houses, roads, manufacturing plants, schools, office buildings, etc., are taken for granted. I felt surprised simply because there is nothing hero-like in the image of an engineer. And it's not just me, I believe. Many have heard of Doctors Without Borders - but that is human health we are talking about. Don't worry. I won't go as far as claiming that we should reserve a cheer for engineers. But I feel like reminding myself (and us) that what the world may truly need is expertise.
Expertise. Expertise in construction, IT, medicine, agriculture, education, food processing, energy, etc. Expertise on all those things that are the structural elements of modern life as we know it. That is the thing that can make the difference. And, indeed, you can't have development without the right minds (and hands) in place (and in the right order). Even when you achieve development, you still need the right experts to ensure sustainability.
That's certainly not a personal discovery of mine. Generating or enriching in-house expertise, attracting the right people, achieving the right level of education, etc., have all been on the competitiveness/innovation agenda of communities (countries and regions) for quite some time now; take the Marie Curie schemes as an example. But still, I find that, as a priority, it tends to slip under the radar quite often - possibly because expertise costs, without leading to direct profits.
I believe that investing in expertise, preferably in a sustainable way, needs to stay on the table, especially in times of crisis. In the same way, expertise should be part of any emergency aid package, be it a response to a natural catastrophe or to human destruction.
Sunday, 3 April 2011
|"System power on off switch"|
by Dhanu under a CC license
It's that kind of problem that is simple to understand, quick to realise and easy to solve. But it's annoying. Very annoying. And it really makes you wonder: why can't cars - all cars - have battery indicators; the kind of battery indicators that actually show how much juice (and life) the battery still has.
Yes, I know, top-end models have it. Newer models, from 2006 onwards, increasingly feature things like that. But why does it have to take soooooooo long to get things like that into a car? The charge level indicator is something that every consumer appliance with a rechargeable battery has been equipped with for the last few decades. It's nothing too sophisticated. It's an old idea. And if one thinks of modern consumer electronics, such as mp3 players, laptops, etc., one can see that, nowadays, there are much more sophisticated - and precise - ways to predict the remaining battery usage time. Judging from the prices of such devices, the electronics in question can't be too expensive.
So why does the car industry ignore that? I understand that a flat battery happens once every 3-4 years, but it is still annoying, and as a problem it seems extremely easy to prevent (two factors are involved: a low amount of energy stored in the battery and/or a reduced capacity of the battery to store energy). True, the car industry would need to adapt the circuits in question to the specifications of the battery and the consumption patterns of their cars. To be fair, that does involve some work, since - unlike mp3 players - cars have a very wide spectrum of power needs. The immobilizer circuit, for instance, doesn't take much power to run, but the ignition does. There's also the temperature span the battery operates under: most mp3 players spend their working lifetime indoors or in a pocket warmed by body heat; car batteries are not that lucky. And on top of that, the automotive industry typically needs to test everything for reliability, both individually and as a whole. But still, how hard can it be?
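The simplest version of such an indicator really isn't rocket science. Here is a minimal sketch of the kind of estimate a dashboard gauge could make: map the battery's resting (open-circuit) voltage to a rough state of charge. The voltage/charge pairs below are ballpark figures often quoted for 12 V lead-acid batteries, not calibrated data for any particular battery:

```python
# Toy state-of-charge estimate for a 12 V lead-acid battery, based on
# its resting (open-circuit) voltage. The table holds approximate,
# commonly quoted values, not calibrated measurements.

OCV_TABLE = [  # (open-circuit volts, state of charge in %)
    (11.9, 0),
    (12.1, 25),
    (12.3, 50),
    (12.5, 75),
    (12.7, 100),
]

def state_of_charge(volts):
    """Linearly interpolate state of charge (%) from resting voltage."""
    if volts <= OCV_TABLE[0][0]:
        return 0.0
    if volts >= OCV_TABLE[-1][0]:
        return 100.0
    # Find the bracketing pair and interpolate between its endpoints.
    for (v0, s0), (v1, s1) in zip(OCV_TABLE, OCV_TABLE[1:]):
        if v0 <= volts <= v1:
            return s0 + (s1 - s0) * (volts - v0) / (v1 - v0)

print(state_of_charge(12.4))  # halfway between the 50% and 75% points
```

A serious implementation would also track current in and out of the battery ('coulomb counting') and compensate for temperature and battery ageing - exactly the extra work the paragraph above concedes the car industry would have to do - but the core idea fits in a few lines.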