Monday, 28 December 2009

Crave-Buy-Use-Discard: The life-cycle of consumer-grade technology

sunflower life-cycle
We must be living in an era of wonders. Product-wise, at least. In many parts of the world, technology has found its way into our everyday lives. From smart textiles and intelligent food-packaging materials to fast, affordable multi-core processors and the highly advanced army of mobile devices that increasingly fill up our pockets, bags and carrying cases, there is no doubt that technology is shaping the form and capabilities of the products around us.

But does this also mean that people have a better grasp of the underlying technology behind such advancements? Are they in a position to get the most out of the modern technological wonders?

Don't get me wrong, I do not mean to criticise in a dismissive way. In fact, that is a rather rhetorical question, since the answer - I believe - is simply "no, not necessarily". Of course, education has improved. The amount of knowledge pumped into young individuals through the educational system has also increased. Plus, young people are incredibly good at familiarising themselves with gadgets of all sorts. But at the same time, the technological level required to produce much of the modern stuff around us has been rising at a higher pace, making catching up a challenging task.

It is no surprise that when things go wrong with modern devices, "fixing them" usually means either "open the box, replace a part, close the box" or - more often - "discard and replace". I admit that, with the current level of integration of the various components, doing otherwise might be highly impractical.

What bugs me, however, is that we don't get the time to make the most of the gadgets we buy. The actual depreciation rate is a tad too quick. Is it because of the poor usability of the devices we buy? Is it because of the market-disruptive advantages the newer devices bring? Is it because of consumers' poor understanding of the true capabilities of the things they already own? Are the price tags low enough to justify a non-conservative approach in our acquisition of technology-oriented goods? Is it simply a question of fashion?

Possibly the reason is a combination of things. If one looks at computer software, say at the office, then it is clear that the features and capabilities offered today are far more than the average user will ever use. Interestingly, for word-processing software in particular, not that many features have been added since the old WordPerfect 5.0 (TM). If one focuses on hardware, things are even worse: CPU cores, in general, spend the majority of their time idle, waiting for user input; GPUs, too, have achieved insane processing power that, unless somebody immerses himself/herself in the realms of gaming, goes unused. Games aside, all that power mostly stays out of reach of the average consumer.

What needs to be done, IMHO, is to give technology more time to mature. I certainly don't expect manufacturers to go down that route any time soon. I do, however, believe that consumers would benefit a lot from adopting technology with a proven relevance to their needs and a reasonable future ahead of it.

Would that kill innovation? Hardly. It would, however, affect what innovative technology looks like.

Take the small, pocket-sized, mobile 10-lumen projectors (the idea in 2008 and the user experience in 2009): The idea is cool; the niche markets are there; but - at least in my region - those products came to the market, stayed for a few weeks and then practically disappeared. Why? Was the market saturated? Were people indifferent to the new product class? I think it was because that new class appeared too early. People are still buying bulky, bright projectors for offices, and the average living room still lacks a projector; clearly, the market is not mature.

With consumers demanding a longer cycle for the technology they use, new products would have to focus on relevance, quality (including adherence to standards) and usability, and feature a serious product-support scheme. Software developers (including firmware developers) would have more time to optimise code and performance and thus allow for a much better utilisation of present technology.

(Photo: "Sunflower life-cycle", by me)

Friday, 18 December 2009

The "I'm doing something" revolution

idle - landscape photo
I use the metro every single weekday to go to work. About a third of the passengers I see are listening to music on their mp3 devices (or at least pretend to do so). Another third are reading a book/magazine/newspaper (or pretend to be reading) or checking/composing SMSs. Of the remaining third, some chat with each other, some talk on the phone and some - a few - seem to do nothing. Being a member of that last group, I can't help but wonder: Are idle moments going extinct? Are they just "out of fashion", or do they reflect a real change in habits?

There is no doubt that we live in a world full of stimuli. In typical everyday life, at any given moment of the day, a great number of things are competing, intentionally or unintentionally, for our attention: the whistle of the kettle, the phone ringtone, the flashing lights on ambulances/police vehicles/etc., posters and billboards, traffic lights, the music from the media player of the guy next to us, the chat of the couple waiting at the bus stop (if it survives the urban background noise), the notifications that pop up every so often in the lower right corner of a Windows desktop, the voice of the boss(/wife), etc. Could it be that people have forgotten what "peace and quiet" once used to mean?

To be fair, what I'm describing might be a big-city-only epidemic. When one leaves the urban environment for the countryside, things often feel slower. That permanent "doing something" state seems to be highly addictive (well, you could also call it a "habit", I guess).

I vividly remember, about two years ago, going north for a meeting together with colleagues; it took us quite a while to get used to the pace of life there. Initially, I felt things were dragging on for ages; ordering a coffee and having it served was a frustrating 10-15 minute affair... It took considerable self-discipline and patience to keep my cool! :-)

It seems to me that we people are fully capable of creating entirely artificial environments, populated by "I'm doing something" and "I'm still doing something" individuals. Having said that, our brain is good at establishing a background of stimuli, regardless of their density. But does this come at a cost? And, if yes, is that cost worth it?

The density of stimuli is associated with the rate of development of babies and children. Mental activity throughout one's life has also been associated with people's (mental) health at the later stages of their lives. So does this mean that the busier generations will live a better (or longer) life? Are those generations more efficient thinkers? Or is there a fine line beyond which information overload has a negative impact on people?

In nature, being idle is the "energy-saving mode" of living organisms, and thus extremely important for the balance between them. But since "energy" may not be too important in some parts of the western world, I wonder: do brief idle periods carry any positive content or not? Does "doing nothing" (while awake) perhaps have the potential to help creative thinking and creativity, to allow people to mentally explore alternatives or to come up with fresh ideas? Could it have any impact on the way people interact with each other?

Too many questions for which, really, I don't know the answers... The only thing I can say is, simply, that from time to time I do enjoy doing nothing for a while :-)

(Photo: "Autumn at Idle Creek", by J. Heffner, / CC BY-NC-ND 2.0)

Monday, 14 December 2009

Usability: The elusive daemon of innovation

Complicated cabling
Have you ever found yourself in the position of getting a cool gadget (or product) in your hands, only to find out, minutes or hours later, that it is annoyingly difficult to use? Have you experienced that transition of feelings, from excitement to frustration? You don't have to be a gadget freak or an "early adopter" to have found yourself in that unfortunate position; it happens to most of us, typical consumers.

The problem with usability is that it is an elusive concept. To make something innovative, you just need to push things a bit forward (by featuring new technology, a new working principle, a new application, etc.). But for that innovative something to become a success, amongst other things, it must also be usable. People need to feel comfortable using it. And no, people don't like reading manuals.

While a growing number of technical standards exist, making things much more straightforward for manufacturers, there are still plenty of traps to fall into. Take for instance the option menus of digital cameras; some are definitely a bit "weirder" than others. Or think of the vast majority of mobile mice which, while advertised as fully functional, space-saving cousins of normal mice, often turn out to be difficult to guide and tiring to use. And what about the early GPS-enabled PDAs, whose displays were simply not bright enough to be used in daylight (or, even if they were bright enough, couldn't stay away from the charger for more than 90 minutes or so)? There are even food products (e.g., dehydrated sauces in powder form) that have failed to gain consumer acceptance simply because their preparation included non-obvious steps.

It is no secret that successful innovation must take advantage of human intuition and/or common sense. Exceptions do exist (e.g., specialised equipment for industrial/medical/etc. use), but usually even there a clever, intuitive design is a plus, simply because it simplifies the learning process and reduces the chances of human error. If you have some time, you might want to visit the Media Lab of MIT for some refreshingly simple ideas.

To be fair, though, usability is no easy task to master. Different cultures, different tastes, and that "cursed" human ability to learn and adapt to most things, which ensures that at least a handful of people will feel comfortable with any given new product, thus misleading its designer...

Fortunately - for us end users - the product-market ecosystem evolves somewhat similarly to natural ones: the fittest products spread more, persist for longer and pass some of their characteristics on to their derivatives. What is more, it is those products that set the requirements for the products and services to come.

[By the way, my first blogspot template took advantage of my full 1680x1050 desktop resolution... until I remembered that the typical netbook is restricted to 1024x600. Good thing I'm a netbook owner, too!]

(Photo: "Complicated cabling", by me.)

Sunday, 13 December 2009

Hearth, Fireplace, TV and Facebook

Fireplace photo
Once upon a time, families would gather around a house's hearth to cook, eat, warm themselves, talk and socialise. Years passed and the custom persisted. The hearth covered so many human needs at the same time that it was a hub of the household. The ancient Greeks even had a goddess for it: Hestia (Εστία).

Much, much later in history, when houses became bigger and the household functions could be separated, it was the fireplace (or the gas heater) that became the centre of a house's social life (at least in winter).

And then came the TV. It didn't (and doesn't) cover the heating part, which is why TV sets and fireplaces often shared the same room, but it did (and does) cover the story-telling/pseudo-social part. TV sets very quickly became the household animators. Numerous voices have criticised the effect of television on human society, although - to be fair - television is also associated with a number of exciting possibilities, making it one of the most disruptive technologies, comparable to written language, the printing press or the radio.

In our digital age the traditional TV faces a number of competitors: the internet offers the potential for interaction and communication, as well as information and entertainment. More and more parts of the world gain access to the digital world. More and more people acquire a digital presence: an e-mail address at first, a blog screen name at a second stage, maybe a YouTube alias, an avatar in an online game, or even a piece of property in a virtual world.

Social life, as most of us know it, had a clear impact on the digital world from its early days. As "expected" (though that is said retrospectively), when people got the tools to reproduce their social links in the online world, those tools became massively successful (see IRC, MySpace, Orkut, Facebook, Twitter, Picasa and Flickr and - of course - the various blog- and forum-hosting sites, to name just a few).

Now, it has started working the other way round. Our digital presence is becoming the point of reference. It is our online profiles that are kept up to date, and it is those profiles that maintain the link with the people in our social circle. Do you want to get to know somebody? Meet him/her online! Link your digital existences!

Don't get me wrong: there is no dismissive criticism behind my words (questions and surprise, maybe). After all, just a few years ago a person's social toolbox would include the telephone (still on the list), letters, postcards and even the local press. With "free time" currently under redefinition, employing IT for a similar purpose seems the natural thing to do...


(Photo: "Hearth & Hound", CC by Woody)

Saturday, 12 December 2009

Technology, free time and creativity

Amstrad CPC 464
Technology - in the western world - has always been advertised as the means to make our lives easier, simpler and "better". What that "better" actually means is open to your interpretation. Often, the appealing thing about new technology is that it saves us time. And that is a cool thing, because when you save time you have more time available for other stuff: you can do more things - if you need or want to - or you can have more free time...

So the first naive question pops into my mind: Do people today have more free time?

No; at least, not the people I know. In fact, if I may compare generations, I tend to think that people work harder (and/or longer hours) now than people did about 20 years ago. Of course, I don't have any solid data behind that statement - it is just a subjective impression.

Second naive question: Are people more creative today?

Now that is a challenging question. Consumer goods of the digital age have certainly made a number of things accessible to a large group of people. Take digital cameras, for instance: photography and filming are no longer the expensive, semi-elitist hobbies they used to be not many years ago. Computers, too: when I was a kid it was the 8-bit age, with the early, expensive, beige "PCs" hosted only in the most "cutting-edge" offices, and the more affordable but odd, smaller personal computers (ZX Spectrum, Commodore 64/128, Amstrad CPC and - a bit later - the AMIGA, etc.) adopted by the nerd minority of the time. Now, computers have proven to be not only excellent companions at work but also a very enabling means for a number of things beyond office life.

So are people more creative today?

Looking once more at the people I know, I have to say, rather not. Of the 20-30 people I brought to mind, about half regularly use their digital camera, on some social occasions only - but they would also do that with an old-style camera if they had to. Of that group, about 5-6 have a blog or maintain a relatively active profile on a social-network site. But, again, those same people used to keep diaries or contribute to newspapers and magazines a few years ago.

So is that a problem of the people themselves?

No, that would be too pessimistic to say. I think it is a question of free time. Technology has put a lot of potential within hand's reach. What one needs is sufficient time to interact with it and learn to channel that potential into something that, sooner or later, hopefully, we'd call "creative".

Critics would suggest that free time and competitiveness don't go together. I disagree. Free time can lead to fresh ideas, and it is fresh ideas that will push us forward and make our lives better. Google "personal projects" and see what people around the world have made. Ok, some of it may look like junk, but there are things that stand out (if I recall correctly, Google itself started as a personal project; so did ICQ and numerous other things). And keep in mind that there may be much more beyond what an internet search engine can reveal...

(Photo: "My second computer", adactio / CC BY 2.0)

Tuesday, 8 December 2009

I talk about innovation, you talk about innovation, but who is the one who innovates?

I was at a project meeting today, attended by our consortium members, all of whom have something to do with innovation, business support or research and technology. Our objective (and biggest worry)? How to boost the rate of innovation uptake by companies - which translates into convincing companies to care about innovation.

You see, in principle innovation is considered to be associated with many good things; plus, it sounds cool. But it also comes with a price tag. Understandably, in difficult times like the ones we live in, investing in innovation is counter-intuitive. The survival instinct kicks in: reduce costs, be vigilant on sales, brace yourself for further financial trouble.

Do tools exist to allow a margin for (or, simply, to make easier) investment in innovation? (Tools like simple methods to evaluate the knowledge potential of an innovation, or schemes for aiding networking and dissemination like the Enterprise Europe Network, come to mind.) Is this the right time to test a new technology?

These are the times when innovation goes high up on businesses' wish lists. This is particularly true for innovation that leads to higher productivity or lower production costs. I do believe, however, that we - at the country level - should be more determined to financially fuel innovation, although I am aware of the risks behind such a move. While supporting the financial backbone through the monetary support of a number of banks has been the typical choice across the globe (not an irrational thing to do, in any case), putting additional funds on the market for applied research or innovation would strengthen the knowledge dimension of the economy, could potentially improve the recovery rate and could lead to new advantageous assets (read: jobs, sustainability and a future).

(photo: "PL-Gdansk/An Industrial SkyLine (2)", CC by Wilfrid)

Sunday, 6 December 2009

Retro innovative technology (think Polaroid[tm])

When I was thinking of starting this blog, a few weeks ago, I had in mind to avoid referring to real-life products, companies or people. I thought - and still believe - that, unless you refer to things really close to you or directly under your control, it is not too difficult to get the facts wrong. Since then, however, I have realised that it is exactly such things that act as "inspiration" to talk about innovation. Thus, starting today, I'll be less strict, allowing myself to share, now and then, a few personal thoughts on "innovative" (or "anti-innovative") products/companies/practices/people that have managed to attract my attention somehow.

(Note 1: This is neither a product review nor a sponsored entry; I am not affiliated with the manufacturing company or any other party involved in the marketing of the products below.)

(Note 2: Every company name, product brand or product name mentioned below is the property of its respective owner.)

On to the point now: I recently got my hands on one of those photo printers for home use. Yes, there are plenty of those. However, I'm referring to the Polaroid PoGo CZU. That is a very special product, which I'm not quite sure should be marketed exactly as a photo printer. Yes, it prints photos from cameras and mobile phones (or other devices that support PictBridge) over a USB or Bluetooth connection. BUT: you can't connect it to a PC, the printouts are fairly small (2''x3'', or approximately 5.1x7.6cm - a bit bigger than most business cards) and you can only use the ZINK paper packs from Polaroid.

Having said that, the PoGo printer is FUN to use. It is the only mobile photo printer I am aware of with dimensions similar to a small external hard drive (weight: 220g), able to fit in your jacket's pocket, with a rechargeable battery that can print about 15 photos per charge. Not to mention that the only consumable it needs is the paper (the paper compartment fits 10 sheets) - no ink cartridges needed, as the dye crystals are embedded in the paper. Ah, and it looks sleek (I used the black version - there is also a pink one available, which would not exactly fit my taste in colours)! Cost-wise, the printer sells at about EUR 50-55 and the cost per printed photo is about 29p. For a bit more than double the price you can find the faster and much more versatile - but bulkier, non-mobile, non-cordless - Canon Selphy 780 (cost per 10x15cm printout: approx. 31p), although you may be able to find other similar products at lower prices (especially if you look for discontinued models).
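Out of curiosity, a quick back-of-the-envelope comparison based on the rough figures quoted above. Note the caveats: prices are approximate, the quoted numbers mix currencies (EUR for the devices, pence for the printouts), and the assumed Selphy price of "roughly double" is my own shorthand, so treat everything as a single illustrative unit rather than real money:

```python
# Total cost of ownership sketch, using the rough figures quoted above:
# PoGo: ~52.5 upfront, ~0.29 per print; Selphy: ~110 upfront (roughly
# double the price), ~0.31 per print. Units are illustrative only,
# since the quoted figures mix EUR and pence.

def total_cost(printer_price, cost_per_print, n_prints):
    """Purchase price plus consumables for n_prints printouts."""
    return printer_price + cost_per_print * n_prints

for n in (50, 200, 500):
    pogo = total_cost(52.5, 0.29, n)
    selphy = total_cost(110.0, 0.31, n)
    print(n, round(pogo, 2), round(selphy, 2))
```

With these (rough) numbers the PoGo stays cheaper at any print volume, being cheaper both upfront and per print; the Selphy's case rests on speed, versatility and printout size rather than cost.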

Well, you may say, it is a Polaroid after all - the company that had a major breakthrough a few decades ago with its instant photo cameras. In the modern era, where the average digital camera owner may shoot a few hundred photos per year, most of which never make it onto paper, a product like the PoGo printer can bring fun to places where it's difficult or inconvenient to share digital content. The photo paper also features a self-adhesive side, so printouts can be used as stickers. Think of parties; think of decorating notepad covers, backpacks, boxes, etc. Think of holiday time, where printing a couple of shots on the spot could be a good laugh (yes, you can share the 10-Mpixel-quality original when you return, but that's a different thing).

What I find "refreshingly innovative" in that product is that it managed to merge some modern technology with nostalgia.

Do I need to close this entry with advice? Well, the bottom line is sort of obvious: If you need a photo printer for quality printouts at a variety of sizes or on different paper qualities, with fast printing speeds, and/or if you normally keep your digital photos stored on a computer, please look elsewhere: most inkjet printers can achieve good photo-quality printouts, most multifunction machines support PictBridge, and an increasing number of photo-specific printers are hitting the market. If you think that getting small, easy-to-share, stickable printouts on the spot from your digital camera or your mobile can be fun, even when you are on the move and away from the usual digital conveniences, then go ahead and have a look...

(photos: "Polaroid One-Step", CC by SqueakyMarmot; "Polaroid PoGo being used at its finest", CC by Inhisgrace, respectively)

Thursday, 3 December 2009

Using general models to solve local challenges... Is that feasible?

Lately, I've been working on a project on knowledge transfer. The project covers a number of countries in Europe, mostly in Southern and Central Europe. The most important challenge for the project is to create an effective methodology for potential knowledge users which, if followed, would allow them to get a pretty accurate first impression of the potential "value" of the knowledge they plan to invest in.

Obviously, there are many factors that affect the "value" of knowledge. To start with, the term "value" refers, in this case, to a subjective value, which takes into consideration the skills and track record of the specific potential knowledge user. Then there are numerous parameters covering aspects like the interlinks of the new knowledge with existing know-how in the company, its proximity to the organisation's current market, its stage of development, the investment required to adopt it and introduce it into the production process, the running costs and the expected income, the new skills it may require, etc. But the hardest part of all is to accommodate the different practices of entrepreneurs in the various countries the project covers, and the differences in the quality or quantity of input that one can expect.
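To make the "first impression" idea concrete, here is a minimal weighted-scoring sketch. The factor names, weights and 0-10 scale are my own illustrative assumptions, not the project's actual methodology:

```python
# Hypothetical weighted-scoring sketch for a first impression of the
# subjective "value" of a piece of knowledge. Factor names, weights
# and the 0-10 scale are illustrative assumptions, not the project's
# actual model.

def knowledge_score(scores, weights):
    """Return the weighted average of factor scores (0-10 scale).

    Both arguments are dicts keyed by factor name; weights should
    sum to 1.0 so the result stays on the same 0-10 scale.
    """
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same factors")
    return sum(scores[f] * weights[f] for f in scores)

# Example: one potential user's subjective ratings of a technology.
weights = {
    "fit_with_existing_knowhow": 0.30,
    "market_proximity": 0.25,
    "development_stage": 0.25,
    "required_investment": 0.20,  # higher score = lighter burden
}
scores = {
    "fit_with_existing_knowhow": 7,
    "market_proximity": 5,
    "development_stage": 6,
    "required_investment": 4,
}
print(round(knowledge_score(scores, weights), 2))
```

Tailoring such a model locally would then amount to adjusting the factor list and the weights per country or sector, while keeping the scoring mechanics the same.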

Companies of similar sizes operate in different ways across countries. In the food sector, for instance, good networking with researchers is more likely to be the case in the north of Europe than in the south. Risk capital is also, typically, harder to get in the south or east of Europe than in the west. Consumers behave differently too, which gives local companies somewhat different priorities.

At this point, I think that a one-size-fits-all policy is hard to apply to real-life challenges such as knowledge transfer involving small or medium enterprises. What I think is feasible is to create a model which can then be tailored to the needs/practices/preferences of its potential users at the local level.

(photo: "Blue Porcelain Globe Illustration from 3D Clay Model", Arctotraveler)

Tuesday, 1 December 2009

innovation = success ????

Does innovation lead to success? You don't need to be an expert to answer that. The answer is a clear-cut no. In fact, success doesn't even have to rely on innovation. When one focuses on consumer-relevant goods - foodstuffs included - it is consumer perception that makes the difference.

Think about tea as an example. I can't speak for the entire world, but I believe that if you happen to live in an urban location somewhere in Europe, your major local supermarket is likely to have at least 3-4 different options. Some play with different flavours (Earl Grey, vanilla-flavoured, citrus-flavoured, and so on) or different kinds of tea (green, black, oolong - i.e., products that have undergone fermentation under different conditions). But some play with "innovations" that claim to have an impact on the final product quality and - in the end - on the consumer's experience of the product. For green tea, for instance, one may find products in different kinds of sachets (square, pyramid-shaped, tight or spacious, paper-based or fibre-based, etc.) and different kinds of packages (all sachets in the same compartment of the packaging, or each sachet individually packed in a paper pocket or a sealed tin-foil envelope, etc.). Each alternative may claim a more or less credible virtue. For example, sealed tin-foil packaging may be good at keeping the tea flavours in place until you open the packaging. Spacious sachets may allow for quicker extraction of compounds from the tea-leaf matrix. But there is no clear winner - not because the differences are impossible to measure, but simply because it doesn't matter much: some people can't tell the difference, others focus on price rather than quality, others feel they deserve the nicest-looking product, etc. That is why marketing departments are busy!

Small companies sometimes get obsessed with a single characteristic of their product. Regardless of how good a job they do, the acceptance of their product will not depend on a single characteristic. And even if one does find consumers who focus on a single characteristic, not all of those consumers focus on the same one.

A while ago, I came across a snack-producing start-up which was promoting some new snack flavours. They had a comprehensive range of products, new to the market. For me as a consumer, their taste/flavour combinations were interesting, the texture was familiarly crunchy, there was a bit of health-talk supporting the product on the label, the packaging was comparable to the competition and the price was ok. The problem was that for said products, the tin-foil packaging, typical of many snack products such as crisps, was not protecting the contents sufficiently. Two out of three times, I would end up opening a package of snack debris instead of the normal product. I assume the product designers never considered the stresses a product endures on its way to the supermarket shelf. It took a while for the problem to reach the ears of the right people. And while they did act on it, changing the packaging to something more appropriate, the lag between the appearance of the problem and its solution may have been crucial.

Bottom line: If you are working on something innovative, don't forget to look at the big picture before launching it on the market. Consumers won't necessarily appreciate the same qualities as you do; actually, they might even fail to see them. If you feel biased (make no mistake, the creator is always biased), ask a friend (or get a professional) to have a look. Before the launch date, every bit of criticism can help!

(photo: "Roasted black tea, in cup", CC, by bdiscoe)