Sunday 28 December 2014

Getting ready for another (new) year

We are just a few days away from 2015, and people are starting to talk about (and hopefully also think through) their resolutions.

'The master plan to-do list' by
the green gables under
a CC license
For me, that is a well-known exercise that I attempt at the start of any new period, not only on New Year's Day. Alas, I normally end up with moderate results, at best.

Despite that, I'm going to give it another try.

This time I'm thinking of writing things down in a more organised fashion. For instance, I'm going to prioritise a few skills (e.g., a new programming language, improving a foreign language, etc.) and also a few qualities (e.g., being more patient). Although I don't have the list ready yet, I can already sense that, whatever objectives I end up choosing, I'll have to fight my fondness for idle time to achieve any of them.

Let's see how things go this time :)


Sunday 21 December 2014

Secret Santa

Secret Santa is a (Western) Christmas tradition in which the members of a group give each other presents. There are many variations of the custom but, in general, the recipient of the gift doesn't know whom the gift came from. Beyond that, the people in a Secret Santa group can set the exact rules: for instance, they may set a price range for the gifts, define a general gift theme to be followed, etc.

'Gift' by Joey Rozier
under a CC license
In some places, the Secret Santa custom is a big deal, with proper planning and implementation. I was a bit surprised to find how-to guides online. I was even more surprised to see online Secret Santa planners, name-drawing tools, etc. Having seen those, it should come as no surprise that there are also apps on the topic (e.g., Secret Santa).
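Out of curiosity, the core of such a name-drawing tool is tiny. Here is a minimal Python sketch (the names are, of course, made up) that keeps reshuffling until nobody draws themselves:

    import random

    def draw_names(participants):
        # Randomly pair each giver with a recipient, never with themselves.
        while True:
            recipients = participants[:]
            random.shuffle(recipients)
            # Redraw if anyone would end up buying a gift for themselves
            if all(g != r for g, r in zip(participants, recipients)):
                return dict(zip(participants, recipients))

    print(draw_names(["Anna", "Bob", "Chris", "Dora"]))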

For people who work 8+ hours in the typical office environment, where human interaction is either office small talk or purpose-oriented collaboration and where a good 60-70% or more of the working time is spent in front of a computer monitor, Secret Santa is an interesting (and possibly a bit awkward) break.

Exchanging gifts, even low-cost ones, if done the right way, is fun. At a deeper level, though, there is more to it. It forces one, for a little while, to shift focus and think of others. It is an opportunity for human contact that may even lead to lasting relationships.

(Yes, true: people can take part in Secret Santa and still be mean or indifferent, but let's set that aside here.)

Sure, Secret Santa is not amongst the bold or basic things the world needs. It is a small Christmas thing, which, for a short time, can make our lives a tiny bit smilier.




Sunday 14 December 2014

Revamping those to-do lists

'to do list' by Eamon Brett
under a CC license
To-do lists are nothing new. They are simple and humble yet, for some people, precious.

Personally, I'm not too much of a fan of such things. For that, I have paid the price on a number of occasions. However, having forgotten - a number of times - to get all the things I need from the supermarket in a single go, and having gotten things I didn't really need instead, I somehow convinced myself to ride the wave and install one of those 'to-do' list apps on my mobile.

I went for Wunderlist, but I soon realised that there are numerous alternatives, such as Toodledo, Remember the Milk (!), Asana and many, many others, including Google Tasks, which is tightly integrated with Gmail and Google Calendar, and the purpose-built Google Keep. Each of those has its pros and cons, some are simpler and more intuitive than others, etc., but all can, in some way, find a home on your smartphone and replace that old-fashioned to-do list on a piece of paper.

(BTW, I won't be doing a review of those apps here. However, there are plenty of reviews over the internet, for example in LifeHacker, the Verge and PC World.)

I'm a bit surprised that so many people came up with an idea (or copied the idea) for an app to replace a simple piece of paper. I know, I shouldn't be. After all, this is a genuinely useful kind of app with quite some margin for extra features.

People have been creating to-do lists all along, and having them in digital form does come with advantages, such as the possibility to re-use lists or list items, share them with others, collaborate around them, combine them with work planning, etc. It's just that such lists can easily live on torn notebook pages and post-it notes and still serve their purpose. In that sense, having such apps feels like overkill but, clearly, that works, too. After all, smartphones are supposed to be much more than a simple mobile phone, and adding to-do list functionality is another (small) step towards helping us in our daily lives.

Sunday 7 December 2014

Do social media work at the professional level?

'Cape Hatteras lighthouse'
by Cathy under a CC license
For me, that's a tough question to answer (something similar was asked a while ago on Quora).

There are numerous services out there that promise to work their networking magic and boost your profile at the personal or professional level. But I don't know whether they actually deliver.

My personal experience on that front is rather limited. Yes, I have had accounts with various services, including Google+, LinkedIn, Twitter, Pinterest, Facebook, Tumblr, etc. However, I haven't really made any new contacts through them, let alone come across any meaningful professional leads. To be fair, I haven't put any specific effort into those objectives, apart from the obvious, i.e., setting up a reasonable profile and keeping - for some time at least - a moderate-to-low level of activity.

There are some, however, who claim that social networking can work miracles. Others advise caution and careful planning before committing time (and possibly money) to building a good social media profile, while yet others see social media as a means to find resources and learn and, possibly, as a path that - if used carefully - could improve one's career chances a bit.

I admit that, in theory, the "right" presence in social media should be an additional plus in one's career efforts. I am also convinced that potential employers (including headhunters?) or even potential collaborators look for information about their potential employees or partners in the social media universe as well. Thus, a decent presence there is not a bad idea and may help future career steps (but a bad presence is a terrible idea, much worse than no presence at all). However, I really wonder how many people amongst one's social media connections can actually provide one with a job offer or, at least, a job interview. The answer may well be "very few" or even "none". But anything other than that would just make things too easy, wouldn't it? After all, if just a handful of one's social media contacts could offer some sound career advice, that would be a welcome thing, too.



Sunday 30 November 2014

Should we add coding to the primary education curriculum?

'Eee Keyboard-PC' by Yoshi5000
under a CC license
Yes, in my humble opinion, we should!

You may think that I have simply been a bit too influenced by the move in Finland to teach typing instead of handwriting in schools. No. In fact, although I see some advantages in introducing courses for typing instead of cursive writing, I wouldn't have gone nearly as far. After all, we still need to be able to communicate, even when electricity is not available.

With coding, however, things are different. As others have explained, coding is more a way of thinking than an exercise for those who have computers. Coding, regardless of the programming language used, requires the skills of describing and understanding a problem, possibly breaking it up into smaller, manageable chunks, and devising a solution employing logic.

Coding can be taught almost hand-in-hand with mathematics (especially numerical analysis) and I suspect that would strengthen kids' skills in both fields. It wouldn't need too much teaching time, either. Most probably, an hour or two per week would be enough to motivate kids to engage further with the topic.
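Just to illustrate what 'hand-in-hand with mathematics' could look like - a hypothetical classroom exercise, not part of any actual curriculum - here is a short Python sketch that estimates pi by throwing random points at a square:

    import random

    def estimate_pi(samples=100000):
        # Monte Carlo estimate: the fraction of random points inside the unit circle approaches pi/4
        inside = sum(1 for _ in range(samples)
                     if random.random() ** 2 + random.random() ** 2 <= 1.0)
        return 4.0 * inside / samples

    print(estimate_pi())  # prints something close to 3.14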

If the curriculum also included user interface design, coding would also blend in elements of fine art, psychology, etc.

There would also be additional benefits for pupils, such as learning to collaborate across teams towards solving a particular problem, developing self-confidence in problem-solving, finding additional routes of creativity, seeking for/creating innovation in software, getting better at using computers and software, etc.

As an added bonus, coding does not require expensive infrastructure. It can be done on basic hardware (including tablets, old PCs, etc.) using free software and, today, an increasing number of households own a computer or a tablet. There is also a lot of help for coders available online, including websites with coding courses, communities of programmers, etc. Coding classes could even run without access to computers, but I admit that this would be rather boring for the kids.

So, yes. Let's give coding a try in schools and, who knows, maybe the coming generations will feature a higher number of brilliant coders or, at least, be better at using logic against challenges.

The video below features Thomas Suarez (not the typical 12-year-old) giving a TEDx talk:


Sunday 23 November 2014

The app update ritual

'My iPhone family pile'
by Blake Patterson
under a CC license
I've been using computers - for work and leisure - for at least 20 years now. In my early PC days, software updates were a rare thing, usually associated with major changes. Update deployment was, in those days, a fully manual procedure: one had to find a disk with the new software version and install it on the target PC.

With time, the internet gained ground and developers started using it as an alternative update distribution vehicle. It has been a very welcome thing, indeed; it is normally an easy process and allows for much more frequent updates.

As the number of our digital devices grows, software updates have become an increasingly important part of our (digital) lives. Smartphones, tablets, routers, intelligent devices (thermostats, smart light bulbs, cameras, even camera lenses) allow for their software to be updated.

The frequency of the updates depends on the product and its developers, but for "small" applications and apps it can be very high. I have come across Android apps that have had 2-3 updates per week. And there, exactly, is where I believe I can spot a problem. The update process is beginning to take a bit more time (as well as bandwidth and data volume) than perhaps it should. Taking one's smartphone offline for a day most probably means being prompted for a few dozen app updates when it comes back online.

Has the ease of deploying updates made developers sloppier? Has it increased the pressure on them to release software as soon as possible, even if not all features are there and even when the software has undergone only a little testing? Or is it just adding value for users, offering them access to new functionality, design enhancements and innovative stuff as they are created? As a software user/consumer, I'd very much like to think it is the latter, though I suspect that we are mostly victims of the former. To be fair, though, for software and apps I really value, hitting the "update" button is often accompanied by great expectations :-)

There is nothing wrong with improving the user experience through well-planned software updates. Needless to say, providing updates to fix security holes or critical bugs is a must, too. However, offering updates too frequently can have a negative impact on users' perception of software quality and comes at a cost (users' time and productivity, network bandwidth, etc.). Is it perhaps time for software developers to rediscover quality practices? Or is the constant updating something that we, the software users, will need to get used to (and perhaps even be taught to like)?


Sunday 16 November 2014

Cloud automation and the internet of things

'Robot' by Christelle
under a CC license
Day by day, our lives become increasingly digital. With the internet gaining ground in our everyday routine, it was inevitable that someone would start interconnecting our network-capable devices (something I think I've written about before...).

In the beginning, things were a bit basic: for instance, being able to check our cloud-based mailbox and our automatically synchronising, cloud-residing files from all our devices (desktop, smartphone, tablet, etc.).

Then cloud applications upped their intelligence a notch. It became possible, for example, to send somebody an email proposing a meeting date; the cloud service would add that date to the recipient's calendar, and the recipient's smartphone would remind them in time for the proposed meeting.

With more and more web services, programs and devices exposing public APIs, cross-application functionality has taken off and the potential for user mashups has become evident. It may sound complicated, but the fact is that it can simplify our daily lives (and - possibly - increase our geek level, too!). It is now possible to check on and control web applications in order to achieve things that, in the past, would have required a separate web service, app or program.

Let's take IFTTT as an example (IFTTT stands for 'If This Then That', by the way - do check their website!): a user can choose from a long list of web services, devices with web output, smartphone events, etc., and have a specific occurrence trigger a reaction. For instance, User1 can set IFTTT to monitor the Twitter posts of User2, and when a new tweet is posted, IFTTT can send an SMS to User1's mobile or email the post to User1's address, etc. Interesting? It gets better. Imagine using it for networked devices, such as a networked thermostat (e.g., a Nest thermostat), a networked light installation (e.g., Philips Hue) or a signal-producing USB device (e.g., Blink(1)). For instance, you can increase the temperature at home when leaving work, or set the lights to the bright setting when an incoming call comes from work. All of a sudden, it is possible to achieve automation that, albeit simple, would have been next to impossible to do (cheaply) a few years ago.
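Conceptually, each such recipe is just a trigger paired with an action. The Python sketch below is not IFTTT's actual API - the trigger and action functions are made-up placeholders - it only illustrates the 'if this then that' pattern:

    # Toy 'if this then that' engine: each recipe pairs a trigger check with an action.
    # The trigger/action functions are hypothetical placeholders, not a real API.

    def new_tweet_from(user):
        def check():
            # A real recipe would poll the Twitter API for new posts by `user`
            return {"user": user, "text": "example tweet"}  # or None if nothing new
        return check

    def send_sms(number):
        def act(event):
            # A real recipe would call an SMS gateway here
            print(f"SMS to {number}: new post by {event['user']}: {event['text']}")
        return act

    recipes = [(new_tweet_from("User2"), send_sms("+00690000000"))]

    for trigger, action in recipes:
        event = trigger()
        if event:          # "if this..."
            action(event)  # "...then that"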

Needless to say, IFTTT is not the only player around. Zapier, Yahoo Pipes, We Wired Web, Cloudwork and others - many others - are available, some for free, some at a cost. I feel certain that more will follow. I believe that what we are seeing is the early days of automation for the masses :-)

Of course, by interconnecting devices and services we are exposing an even larger part of our (real) lives to third parties. This, inevitably, implies risks. Rogue or simply irresponsible service providers may opt to sell our personal data; hackers may gain control of our smartphones, lights, etc. Our privacy may be compromised in ways that may not be immediately obvious, perhaps in directions we wouldn't really want.

As always, innovation, in itself, is not good or bad. It is just something new. It is up to us to find the best way to use it. To strike the right balance. To shape the market into the form we want, placing the right safeguards and, ultimately, to make our lives a bit better (or funnier... or geekier...), while keeping us on the safe side.

Disclosure note (and some of the usual 'fine print'): I am not affiliated with, nor have I received any subsidy/grant/benefit in return for this post from, any of the companies whose products are mentioned above. Mentioning a product or a service in this post is not meant to constitute an endorsement (as I have not personally used all of those products). The names of the above-mentioned products and services are the property of their respective owners.

Sunday 9 November 2014

Compatibility: the challenge for digital archiving

'5 1/4 floppy disk' by Rae Allen
under a CC license
Today I've spent a good couple of hours migrating some 15-year-old e-mails of mine from a legacy e-mail client to Thunderbird. It wasn't a difficult process, but it did need a bit of research to figure out the steps needed to do the job and to try a couple of suggested alternatives. Last week I had a (shorter) adventure extracting text from Word 2.0 and WordPerfect documents. Maybe 15 or 20 years is an eternity in the digital world, but that won't stop me from mentioning - again - the challenges of forward/backward compatibility in media and formats.

(Sigh)


Anyone who has been using a computer for a fair amount of time is probably aware of the advice to back up their data. They may not actually follow it, or may not even know how to do it, but they are very likely to have heard the advice.

There are plenty of reasons to back up one's data. The main one, of course, is security against data loss due to:
  • hardware failure (e.g., hard drive damage)
  • disaster of any kind
  • user error (e.g., file deleted + trash can emptied + free space wiped, file overwritten, etc.)
  • malicious act (e.g., file destroyed by malware of any kind), etc.
For the enterprise environment, backup is (supposed to be) a must. In certain countries, the backup of specific corporate data is mandated by law. Regardless of that, corporate backup tends to be more comprehensive, maintaining data versions, multiple copies, distribution of copies across different media and locations, ideally both on-site and off-site, etc.

Corporations that depend on their data or need to keep a digital archive, inevitably, have dedicated infrastructure and people to take care of their backup needs.

Individuals, though, normally have much less. Yes, there is plenty of software, both free and commercial, that can take backups. Also, most OSes come with some kind of built-in backup/restore utility. However, their user-friendliness and their compatibility across different platforms or, even, major OS versions are not guaranteed.

Even if a user chooses to stick to the same backup solution (which could be something as simple as a plain file copy from one disk to another), there is the challenge of medium suitability and durability. Anyone who has been using a PC for more than 10 years is likely to have used floppy disks and/or ZIP drives and/or CDs and/or DVDs and/or external hard drives and/or flash drives for their temporary or long-term backups. The problem is that some of the aforementioned media are not readily supported by a modern PC: modern PCs have neither 5¼'' drives to read the old floppies, nor parallel ports to support the original ZIP drives.

To be on the safe side, a user keen on archiving should, from time to time, migrate data from one medium to another. This is a very tedious task, especially if a large number of storage media are involved, but let's assume that it is reasonably feasible.
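If the migration is done by hand (say, plain file copies from an old disk to a new one), the one step worth automating is verification. A minimal Python sketch, with hypothetical mount points, that compares checksums on both sides:

    import hashlib
    from pathlib import Path

    def checksum(path, algo="sha256"):
        # Hex digest of a file, read in chunks to cope with large files
        h = hashlib.new(algo)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    old_root, new_root = Path("/media/old_disk"), Path("/media/new_disk")  # hypothetical paths
    for old_file in old_root.rglob("*"):
        if old_file.is_file():
            new_file = new_root / old_file.relative_to(old_root)
            if not new_file.exists() or checksum(old_file) != checksum(new_file):
                print("MISSING OR CORRUPT:", new_file)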

The ultimate challenge is compatibility across file formats and program versions. Common formats that adhere to widespread standards are normally in the clear. Image files, for instance, such as JPEG, GIF or BMP, have a long history, so files created decades ago will be displayed by virtually all modern software. The opposite doesn't necessarily apply, i.e., newer files cannot always be displayed by legacy software. When it comes to formats for files that are not so frequently exchanged, however, compatibility may be an issue. Take e-mail files, for instance. Different e-mail clients tend to store e-mail in different structures. Nowadays, when e-mail clients are often part of the OS, things tend to be clearer, though a few years ago there was considerably more fragmentation (e.g., different formats for Eudora, Netscape/Unix, Outlook Express, Outlook, Pegasus Mail, etc.). In fact, today, a large portion of our e-mail stays in the cloud, which sort of solves the compatibility problem, although it introduces a different set of challenges.

Is there a bottom line to this? Well, not really. If one needs data from the past, one needs to either maintain legacy hardware and software (which may or may not be possible) or put in the effort to migrate the data to newer formats and media. It sounds deceptively simple, doesn't it?

(The following video is a talk by Chad Fowler from a Scala Days conference about 'legacy' in software development - it is a long, poorly lit, but interesting presentation.)



Tuesday 4 November 2014

There is plenty of information around but how much of it can we practically find?

'Another haystack' by Maxine
under a CC license
The frank answer is: it depends - on many things.

First of all, I'm talking about information that is available on the internet. That excludes books that are not available online, databases that run locally, etc. More specifically, I'm talking about information that has been indexed by at least one search engine, at least at the level of general content description. I'm not differentiating among the different types of information, though.

Estimates of the size of the internet in 2013 spoke of about 759 million websites, of which around 510 million were active, which - in turn - host some 14.3 trillion webpages. Google has indexed about 48 billion of those and Bing about 14 billion. The amount of accessible data is estimated at about 672 million TB (terabytes), which likely includes the indexed content and part of the deep web.
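Putting the quoted figures side by side gives a feel for how thin the indexed slice is - back-of-the-envelope arithmetic only:

    webpages_total = 14.3e12   # estimated webpages, 2013
    indexed_google = 48e9      # pages indexed by Google
    indexed_bing   = 14e9      # pages indexed by Bing

    print(f"Google covers ~{100 * indexed_google / webpages_total:.2f}% of the estimated web")
    print(f"Bing covers   ~{100 * indexed_bing / webpages_total:.2f}% of the estimated web")
    # -> roughly 0.34% and 0.10%, respectively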

On top of that, we have the dark internet - but this is a different thing.

So, there is a lot of information indexed (and much more that lies beyond indexes). Year-by-year we are getting more-and-more used to using and relying on the internet. But how "much" useful information can we normally find?

Assuming we are talking about seeking 'general information', the main tool is a search engine. While common search queries return tens of millions of results, most users tend to focus on the first few hits. SEO experts often talk about users sticking to the first five search engine hits or - at most - the results of the first page. Some disagree, but still very few users go through all the results. Of course, persistent people seeking specific information do tend to try different search queries in order to reach reasonably relevant material.

The interesting point regarding search engines and their results is that the results on the first page are very valuable. So the question is: if some invest in placing their content at the top of the search results, how can the user find relevant content that is maintained by people unwilling or unable to invest in SEO, e.g., a non-profit or simply enthusiastic individuals?

Of course, search engines use result-ranking algorithms that take into consideration a very long list of factors. Content quantity and quality are amongst those factors; popularity is another, etc. However, the way those ranking algorithms work (the exact formula is kept secret) may include - e.g., in the case of Google - a ranking bonus for content from the user's Google+ contacts. They may also include a fading mechanism, where very old, possibly unmaintained information is ranked below more recent content. Websites offering content over a secure connection (HTTPS instead of plain HTTP) get a bonus, too, etc.
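Since the exact formulas are secret, the sketch below is only a toy illustration of the idea of weighted ranking factors - the weights, field names and numbers are entirely made up:

    def toy_rank_score(page):
        # Hypothetical weighted-factor scoring; not any real engine's formula.
        score = 3.0 * page.get("content_quality", 0)    # 0..1, however 'quality' is measured
        score += 2.0 * page.get("popularity", 0)        # 0..1, e.g. links or traffic
        score -= 0.1 * page.get("age_years", 0)         # older, unmaintained content fades
        if page.get("https"):
            score += 0.5                                # small bonus for secure connections
        if page.get("by_contact"):
            score += 0.5                                # personalisation bonus (a contact's content)
        return score

    pages = [
        {"url": "https://example.org/guide", "content_quality": 0.9, "popularity": 0.3, "age_years": 6, "https": True},
        {"url": "http://example.com/ad", "content_quality": 0.4, "popularity": 0.9, "age_years": 1, "https": False},
    ]
    for p in sorted(pages, key=toy_rank_score, reverse=True):
        print(p["url"], round(toy_rank_score(p), 2))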

All those twists and fine-tunings are meant to help the 'average user' (I guess) reach the content they need, while at the same time giving content providers (including companies investing in advertising and SEO) a chance to reach their target audience. Most of the time, advanced users will employ additional tricks to refine their searches, but (I assume) the ranking algorithms work in the same way for them, too.

Needless to say, when search engines (and Google in particular) modify their ranking algorithms, many people worry and many people get busy.

To make things slightly more challenging, content on the internet tends to change with time. Webpages may disappear for technical reasons. Links to content may be hidden in some regions due to the right to be forgotten (a very interesting topic on its own). Or content may be removed for a variety of reasons, e.g., copyright violations or even DMCA takedown notices.

The point is that finding the information one wants takes persistence, intuition, imagination, good knowledge of how search engines work, sufficient time and luck (not necessarily in that order). The problem that remains is that this information is very likely to represent only part of the whole picture.

Some will say that this has always been the case when seeking information. True. But now that accessible information feels 'abundant', the temptation to stop looking for new data after the first few relevant search engine hits is really strong.

Unfortunately, the responsibility still falls on the user: to be wary of gaps or biases of any kind and to keep looking until the topic in question is properly (or reasonably?) addressed. It's not an easy task. With time, however, it's likely that we'll develop additional practical norms to handle it.


Sunday 2 November 2014

The Dunning-Kruger effect...

'Neon Jester' by Thomas Hawk
under a CC license
...or 'confidence and competence are two very different things' or, put more bluntly, 'never attribute to malice that which is adequately explained by stupidity'.

The Dunning-Kruger effect is the condition where one feels confident about one's performance despite lacking the required skills. At the same time, skilled individuals may lack confidence because they assume they are no better than their peers. Thus, self-evaluation tends to work differently in skilled and unskilled individuals, with the former being more critical of their performance and the latter failing to realise their shortcomings.

The Dunning-Kruger effect manifests itself in many parts of everyday life and could help explain several of the shortfalls we witness around us. For instance, managers who may have been selected for their confidence and overall attitude may be prone to repeated errors of judgement if they are not skilled in the subject matter of their business. Since tall, multi-layer management structures are commonly adopted across many sectors, such cases might be more common than one would think.

However, the effect does have its limits, and it can be mitigated or even avoided. The fact that it does not manifest itself with the same intensity across different cultures indicates that it is affected by the way people are raised and the environment they are exposed to. It also suggests that it can be addressed through the education system, which could also work on the approaches people use for self-evaluation.

Indeed, we need to ensure that people understand the value of expertise, especially when we are talking about people that go up the management ladder. We also need to make experts more visible and accessible, in particular to people in power. More importantly, we need to find ways to promote teamwork and encourage the formation of multi-skill (and possibly also multi-cultural) flexible groups within organisations, not being afraid to use flat or matrix organisational structures, so as to ensure that problems are correctly identified and assessed and that solutions are well-conceived and implemented.

These are easy things to say, but they would require plenty of small changes to ensure that such a system would survive. For example, remuneration, benefits and motivational perks would need to be allocated under a modified rationale. Appraisals would also need to be carried out in a different way. Quality practices (which normally assume that tasks are carried out by suitable experts) may also need to be adapted.

Dilbert by Scott Adams, Strip of 26/08/1992

Tuesday 28 October 2014

What is the must-have knowledge and how does one protect it?

'Book stack' by ginny
under a CC license
There was that Slashdot article on survival knowledge that got me thinking again. The question is: (i) what is the minimum knowledge that human civilisation should have in order to kick-start itself after a catastrophic event of some sort and (ii) how does one effectively preserve that knowledge?

There is even a book on 'rebooting civilization' and I'm sure there are plenty more works on that question, which - by the way - is not at all uncommon.

The second part of the question seems simpler. There is no ultimate backup medium, and we already know that the internet is no safe bet. Modern technology is good and sleek, but it can fail, too (plenty of personal experience on that front). Modern non-magnetic storage media, such as DVDs, seem like a decent backup solution, but they haven't been tested against time yet. Magnetic media are now reliable for operation on the scale of 5-10 years, but one shouldn't expect miracles. The 'cloud' could do better, since the storage equipment is maintained, but then access to the stored data can't be guaranteed. For any digital storage medium and format there is the additional challenge of compatibility with future (or past) equipment.

I hate to admit it, but as a storage medium, paper has served us reasonably well, despite being fragile, compostable, flammable, etc. Amazing, isn't it? And with some tricks we could store even more per page, even if that would make pages illegible to humans (for example, the QR code below contains the first two paragraphs of this blog entry - and, yes, it can be printed smaller and still be readable by a smartphone).
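For the record, generating such a code takes only a couple of lines. The sketch below uses the third-party Python qrcode package and a placeholder string, so treat it as an illustration rather than the exact code behind the image above:

    import qrcode  # third-party package: pip install qrcode[pil]

    text = "The first two paragraphs of the blog entry would go here..."  # placeholder text
    img = qrcode.make(text)        # automatically picks a QR version large enough for the data
    img.save("blog_entry_qr.png")  # scan the resulting PNG back with any smartphone QR reader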


To settle the argument, let's say that we use a combination of media and storage methods to be on the safe side. What should we put on them? The Survivor Library that the article mentioned has an interesting selection of topics, ranging from 'hat making' and 'food' to 'anesthesia' and 'lithography'. Several state-of-the-art areas are missing (but that may be the point), and so are some well-established disciplines such as mathematics and physics, while some of the included topics we could probably do without.

Some have proposed keeping a copy of Wikipedia in a safe place. Yes, Wikipedia can be downloaded (its database dump, at least) and its size - so far - is said to be about 0.5 TB (i.e., 512 GB, or roughly 110 standard DVDs) with all the media files included.
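The DVD figure is easy to sanity-check with some rough arithmetic, assuming standard 4.7 GB single-layer discs:

    dump_size_gb = 512      # quoted size of the dump, media files included
    dvd_capacity_gb = 4.7   # single-layer DVD

    print(round(dump_size_gb / dvd_capacity_gb), "discs")  # -> about 109 discs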

There are also several physical and digital archives, some specialised, others more general. The Internet Archive is an interesting approach, as it keeps snapshots of various websites at various times. Not necessarily useful for the survival of mankind, but interesting anyway.

Another tricky bit that may not be apparent is that knowledge can only be effectively used by skilled people. So, not only do we need the knowledge, but also a group of people with sufficient expertise to put that knowledge to good use. And then we need materials, resources and tools to allow the knowledge to be put into practice...

Hmmm.... Saving the civilisation seems to need a lot of thinking, after all :)

Sunday 26 October 2014

Art as a commodity

'Street art @ London' by
Alex Abian under a CC license
I am not an art expert and I'm sure that I don't have the right 'eye' for art. However, there are pieces of art - paintings, buildings, graffiti, etc. - that I find 'active', in the sense that they seem able to stir an emotion in me.

At the most basic level, art is an object of some sort. For instance, a painting is a surface with drawings and, possibly, colours on it, arranged in a particular way. They may or may not resemble a physical object or scene. The result may or may not look nice. But it will evoke an emotion in some people, perhaps under certain circumstances.

To my understanding, regardless of their other functions, works of art are a kind of commodity. A very special one, certainly different from other goods. So my question is: how does one put a price tag on a work of art?

Yes, I assume supply and demand is involved. But that shouldn't be the only factor. The reputation of the artist? Yes, that too should play a role. The views of the critics? That, too. Other factors such as the 'collectible' value also play a role. The effect the work of art has on people? Hmmm.... I'm not sure that this really counts.

Well, it is clear that I won't be reaching a conclusion here. To me, handling art as a commodity feels a bit strange. The only thing I'm certain about is that art, in all possible forms and price tags, can be a very welcome addition to our everyday life!


Sunday 19 October 2014

If we want innovation we may need to rethink the right to fail

'Failure' by Beat Küng
under a CC license
Success and failure are two terms that we come to meet very early in life. The paradox is that, while we learn and develop through failure, ultimately reaching success, later in life, we tend to look down on those who do not succeed.

Certainly, there must be an element of evolution involved in that attitude of ours. Clearly, success is the desirable outcome. When it comes to making breakthroughs, though, regardless of whether those are disruptive innovations or smaller forward-leap ideas, trial and error - or, in plain English, failure - is part of the process. Our stance on somebody's failure normally includes elements of constructive (or not-so-constructive) criticism and sympathy, at ratios that vary according to our ties with the individual in question and the impact of the individual's failure.

At any rate, despite the fact we know that failure is part of life, which may even lead to success, we often 'forget' that people have the 'right to fail', at least to some extent.

Interestingly, our legal and business norms seem better prepared to handle failure than our social instincts. Entrepreneurs can go bankrupt, for instance, and start over after a while. First offenders get a 'lighter' treatment in the justice system. In each case, of course, the impact of failure on the individual does vary - and there is always some negative impact and maybe even some longer lasting effects.

So the question is, how do we shape things in such a way that the fear of failure does not hinder innovation, including innovative thinking, innovative design, innovative practices, etc., while the impact of a likely failure is contained reasonably well?

I'm not sure I have the answer to that. But there are things, related both to the effort towards success and to the (potential) failure, most of them already tested and proven, that may help:
  • Make advice easily available to innovators. That may be through free research or business development services, through subsidies available for consultants, etc.
  • Develop a network of mentors available to support innovators. Having a mentor solves the problems of 'what is the right question to ask a consultant?' and 'how do I prioritise tasks?'. Such schemes - to my knowledge - have been limited mostly to academic and large-corporation environments. Maybe it is worth considering how to extend such schemes to emerging innovative entrepreneurs.
  • Encourage step-wise development. Such steps would limit the cost of failure at each step with the added bonus of better awareness of all opportunities as the 'product' matures.
  • Encourage pooling of resources and diversify investment. Now that is a tricky one. It can apply to both enterprises and investors, including financial institutions. The former may not have the capacity to adopt such an approach, but the latter most likely have something like that already in place. The problem is how to correctly estimate the risk of each investment, so as to allocate reasonable funds in a reasonable way. There, both underestimating and overestimating the risk lead to serious problems for the innovation system.
  • Provide guidance after (potential) failure. Yes, seriously. Failure doesn't always have to be an abrupt halt; innovators should have the means to assess what went wrong and whether/how it can be fixed. And yes, the next step is to provide resources after (potential) failure, should things prove to be fixable.
  • Promote success stories.
  • Encourage the innovative thinking of students within the education system. That should be a no-brainer, yet in practice we choose to be on the conservative side. There are many ways to do that; gamification of the challenge could be one of the alternatives. To be fair, however, that is no easy task - especially if the education system runs on limited resources. In any case, it should include advice on how to deal with failure at the factual and - possibly - the emotional level.
The list above is only indicative. The bad news is that all these measures come at a cost and that the potential benefit is linked to the (perhaps risky) innovation at the end of the chain. The good news is that such measures can be applied within different environments and at suitable intensities, minimising risk while still being able to reach (and study) results.

And, for the end, a couple of relevant TED talks. As usual, inspiring to watch :)




Sunday 12 October 2014

Could we increase ideas' diversity by simply switching languages?

'Language' by
<leonie di vienna> under
a CC license
The idea that language and thought are interconnected is not new. Discussion is still very much ongoing as to whether it is language that is shaped by perception or vice versa. A number of interesting examples surface from time to time, demonstrating the link between language, perception and - possibly - thinking.

For instance, the Pormpuraawans, an Aboriginal community in Australia, use cardinal directions (north, south, east, west) in their speech instead of relative ones (left, right). This seems to be associated with a very high awareness of orientation, even indoors. As described by L. Boroditsky, when those people were given a series of cards depicting a temporal sequence (such as a man aging), they used an east-west orientation to put them in order, while English-speaking subjects would use a left-to-right order and Hebrew speakers a right-to-left order for the same task.

While all this is indeed interesting, I find intriguing the idea that, by simply switching languages, our perception of reality may shift. It sounds like being able to change one's viewpoint on a problem, or think out of the box, by taking that one easy step (if one is bilingual or multilingual, of course).

I do remember one of my teachers arguing that, in order to learn and then master a language, one should stop merely translating from one's mother tongue into the new language and, instead, think about what one needs to say in the new language altogether. I know it sounds confusing, but there may be some truth in that advice. In my limited experience, different population groups think differently, their language often reflects that, and using that language helps a foreigner understand that different way of thinking, at least if they have been exposed to the corresponding culture.

Should the facts be right and the hypothesis of a two-way link between language and thinking be valid, there is certainly considerable potential here. Imagine being able to instantly enhance the diversity of opinions simply by having a group discuss a topic in a different language, by keeping notes - and later reviewing them - in a different language or, even, by producing - at a later stage - a summary of thoughts and decisions in a different language. Alternatively, in a more traditional approach, one could try mixing people with different mother tongues in the same working group, although that may not always be feasible or practical. Some thoughts on activities, actions or interventions that sound unlikely to succeed, or too unconventional, in one language may sound perfectly reasonable or manageable in another, simply because the two languages may be linked to societal perceptions of different dynamism.

Of course, all that assumes that people are well immersed in the second language they use, which typically happens when they have a very good level in that language. Such people, however, are increasingly common today. There is, unfortunately, the catch that, regardless of how good or bad something sounds in a discussion, implementing a decision will have its own effect (good or bad) independently of the discussion that preceded it. Still, diversity of views should be a plus for identifying problems, solutions, risks and opportunities.

At any rate, while not the only such approach, this is a route that should be easy to explore, since there is no extra cost involved (most people tend to know a second language anyway). Maybe it will prove too good to be true, maybe not. Having said that, it will feel rather awkward and unconventional, at least at the beginning, but - hey - there is no real harm in trying it once or twice :-). After all, because of the internet, international collaboration, globalisation, world politics, etc., using a language other than one's mother tongue is not that rare any more...

Wednesday 8 October 2014

Simplicity: the all-too-common target we normally miss

'Beauty in Simplicity' by Clay Carson
under a CC license
Can you recall the safety demonstration performed just before take-off on every flight? It basically covers just four things (seat belts, oxygen masks, life jackets, and the emergency exits and routes to them). That simple: the bare minimum information that can save lives in case of an emergency aboard a plane which, by the way, is a very complex machine.

I like simplicity. Most people do, I believe. But I'm used to things around me being complex and requiring handling of a certain complexity.

In some cases, simplicity may be a matter of taste. For instance, minimalist architecture, minimalist design and minimalism, in general. Then, it may be a matter of function or usability. For example, the one-button mouse that Apple introduced or the bare interface of GNOME or Xfce, the operation of Microsoft Kinect and so on. And, of course, we have simplicity in processes and procedures (administrative procedures included), with the one-stop-shops and lean manufacturing or lean management concepts as examples.

To me, the latter is of utmost importance. Simplicity is the approach that saves resources, helps transparency, facilitates participation, minimises mistakes, encourages standardisation, etc. For instance, could you imagine referendums with complex what-if sort of questions? I hope not. That level of simplicity should be a target for most processes and procedures around us. The tax forms, the procedures for establishing businesses, the formalities of communication across public or private organisations, the procedures for public consultation, etc.

Of course, many will argue that a one-size-fits-all approach doesn't really work in all aspects of life. True. But I believe that the challenge is to apply simple models to small groups of applications in a coherent way, rather than trying to use a single process for all applications. However, that is no small feat. Mistakes will be made, corrective actions will need to be taken and a new 'simplification' cycle will need to start. And therein lies the hidden challenge: frequent changes cause confusion, regardless of whether each new approach is a simple one.

Simplicity (and clarity) is a thing that we could certainly use more of. At the collective level, it could allow things to function better and at a lower cost. It would cut down red tape and limit confusion. At a more personal level, simplicity has the potential to make our lives better and give us the chance to focus more on the things that matter, undistracted by clutter, regardless of whether those 'things' are people, causes or creations of any kind.

So, once more, is there a limit to simplicity? Most likely yes. But we still have plenty of room before we hit it.

Sunday 5 October 2014

Do we make the most out of (computing) technology?

'Typewriter' by Reavenshoe Group
under a CC license
Sadly, the brief answer is no. Most of us have in our hands, at home or at work, computing or other electronic hardware that would have been considered pure fiction 20-30 years ago. Although we have changed the way we live and work due to technology, the steps forward we have made don't necessarily go hand in hand with the leaps in technology we have witnessed.

Of course there are exceptions to the observation above, but let me mention a couple of examples - you can tell me whether they sound familiar or not.

At the place I work, all employees have PCs. Their (the PCs') primary tasks are e-mail, word processing (and printing) and web browsing (not necessarily in that order). Yes, sure, some people do statistical analysis, some DTP, some database design, and some feed input into a number of databases but, still, the majority of PC time is devoted to the three things I mentioned before. You may think that the volume of work or the quality of the output has increased. Indeed, it may have. But there is still a small number of regular PC users who treat word-processing software more like a typewriter than a modern PC. OK, I'm exaggerating here, but I believe you can see my point.

The other major change has been in the field of mobile devices. Each smartphone is practically a small computer, powerful enough to handle not only calls and messages but also browsing, VoIP, video chat and practically most of the stuff that would run on a desktop computer. Do people use those features? Yes, some people use some of them. But others seem to have problems with the new technology. The following infographic shows an approximate breakdown of the various uses of smartphones.



According to the infographic above, new stuff (web, search, social media, news, other) accounts for a moderate-to-low 24% of smartphone use time. An interesting question would be whether the total time spent interacting with smartphones is higher than before, when we had plain mobile phones. I suspect it is.

So why can't we make more and different things now that we have such computing power in our hands?

I don't really know (I'll be doing some guessing here) but here are some possible reasons:
  • Bad user interface design. Yes, all manufacturers and software designers call their interfaces intuitive, but that is not always the case. To make things worse, I don't believe there is such a thing as the perfect user-friendly, intuitive interface. It will always take persistence, imagination and luck to use an interface successfully. But there are design basics that can help. Below is an early (very) critical review of Windows 8 (which, btw, I rather like as an OS).


  • Crappy or buggy software; software incompatibilities; software complexity; inconsistency across platforms and devices; lack of decent manuals or efficient tutorials; lack of user training (it sounds old-fashioned, but in some cases it could help).
  • Software cost and/or poor use of open-source software. This particular point always bugs me. It's fine to pay for software that enhances productivity. But why do businesses avoid investing in open-source software in a coherent way? Especially in cases where the open-source alternative proves better in usability, compatibility and, well, cost.
  • Hardware restrictions. Yes, you read that correctly. We have plenty of processing power, but we may have other limitations that hinder full use of that power. For instance, smartphones can do a lot, but they need a reliable connection to a fast network, and that comes at a cost that in many cases is undesirable or even excessive. Another example is modern PCs, which are powerful but often come with the minimum possible display real estate. Just adding a second monitor would boost productivity (and save on printer paper), yet the majority of workplaces I know of stick to small single monitors (often badly positioned in front of the user). Another all-too-common thing is policy restrictions on the use of PCs, some of which severely impact usability, especially when paired with an IT department that refuses to listen to users' needs.
  • IT departments that are overloaded with the typical tasks and don't have the resources to add new capabilities to their systems (an extra programmer could do miracles under many circumstances).
  • No reliable communication between (casual) users and developers to assist new product development or product improvement (yes, there are beta testers and developers can gather telemetry data but this is not even close in magnitude to what I refer to).
The disappointing thing is that most of the problems above are not so hard to address. Maybe the entire product-market-user model needs some rethinking. Maybe developers and, possibly, manufacturers need to put more effort into durable platforms and commit to supporting them for longer periods. And, finally, maybe we, the users, need to be more conscious of our options/choices and voice our thoughts/wishes/concerns when needed. Just saying...


Thursday 2 October 2014

What does it take to make an active citizen?

'Smile! It's Contagious' by
Daniel Go under a CC license
An active citizen is a citizen with a proper sense of responsibility towards society. That is, indeed, a vague and ambiguous statement that can hardly serve as a definition. Just to add to the confusion, active citizens are not necessarily activists - at least not in the negative light that has occasionally been shed on that term. The problem is that there is no formal definition of active citizens, just examples casting them as the good guys of society, the ones doing the right thing, from respecting the environment to voting and from properly voicing their opinion to volunteering for a good cause.

Does a society need active citizens? Certainly yes! Could a society do without them? Maybe. But it would need to rely heavily on other mechanisms to ensure its proper functioning, should the majority of its members choose not to fulfil their responsibilities. Imagine, for instance, a society where people neglect the environment and, instead, only pollute.

Active citizens can drive societies further ahead of what laws and established norms could, on their own, achieve. They could do that possibly at a lower total cost, mobilising more diverse resources and, most likely, managing them more effectively.

The question is: how does a society (or a state) encourage active citizenship, especially in times of a weak economy and overall uncertainty? What do people need in order to grow from plain individuals into active citizens within a dynamic society?

Inevitably, I'll be doing some not-properly-documented brainstorming here but feel free to correct me:
  • People need to be inspired by someone or something. This could be done via a role model, a motivational speech, work experience, culture, personal interactions with others, etc.
  • People need the time to process ideas, to reflect, plan, discuss and reach decisions.
  • People need the space and the means (or resources or support) to implement their decisions, to setup, run and monitor their plan.
  • People need to have margin for failure.
  • Should success come, people need to be able to benefit, at least morally and emotionally so as to, in turn, inspire others.
That's a small collection of five rather naive and quite idealistic points. In practice, people won't have the privilege of all of them - at least not at the same time. However, there are feasible steps that societies/states can take to make the environment friendlier to active citizens. For instance:
  • Setting up/improving a clear and easy-to-comprehend legal framework for citizen welfare (health, education and further development, employment). Just for the sake of argument, healthy work environments with proper paid leave and decent minimum wages would allow people to think beyond work as a means of survival. Adding incentives could help a lot (e.g., leave for charity work).
  • Encouraging corporate social responsibility in both the private and the public sector, so as to benefit society directly but also to further expose people to (some) values relevant to active citizenship.
  • Establishing policies that provide a framework for citizen initiatives and (some) access to resources, e.g., simple processes for establishing non-profit CSOs, providing access to data, allowing access to and use of public spaces, providing public funding for certain citizen initiatives, providing legal advice and business-plan support, etc.
  • Promoting the culture of active citizenship, e.g., via education or via the promotion of successful initiatives.
  • Interacting with citizens - active and not-so-active - in motivational ways (inviting input, listening, discussing, providing feedback).
  • Adopting good practices, investing in results and working towards causes highlighted by active citizens, etc., thus demonstrating that getting actively involved leads to positive change and benefits society.
I'm sure there is more to add to the list. I'm also well aware that measures in that direction do exist in nearly all EU countries. However, there is always more that can be done, even with few resources available. In challenging times, this is a promising path worth investigating...

[Of course, many, many others have voiced (a variety of) thoughts on this topic in the past. The video below is from a TEDx talk by Dave Meslin. Some of his points have only local relevance, but most apply rather widely.]


Monday 29 September 2014

Data clouds

'Bowl of clouds' by
Kevin Dooley under
a CC license
I was sorting out my (digital) photos the other day. Browsing, cropping, retouching, titling, tagging, sharing and all those things that normally follow the transfer of photos from the camera to the computer.

[This is certainly a point where I could say that in the (somewhat) old days, the film days, things were much easier. All one had to do was shoot a roll of film (36 shots at most, which would normally take days, weeks or even months to finish), take it to a store to have it developed and then select a few nice prints for the photo album or, even simpler, stack them in a box and put the box aside. Sharing photos meant having reprints made, which was not the most pleasant of processes, which, in turn, is why many people I know used to order two sets of prints straight away.]

Regardless, I won't be comparing with the old days on that level, partly because I enjoy taking photos and I don't mind all the post-processing steps. The only thing I may miss a bit is getting together with friends to show the photos, but that's another story.

I do like, however, to preserve photos in some way, in an organised fashion if possible. I think of them as little pieces of (my) history; bits of memory that will - eventually and inevitably - begin to fade from my mind. In the film days, preservation was not really an issue. Prints could last for years, maybe decades; negatives could/can last even longer. Today, digital copies - photo files - are thought to last forever. Correct? Well, not precisely. They last only as long as the medium that holds them lasts. And here is where problems begin to arise.

The data volumes we are talking about are rapidly increasing. Modern cameras make shooting photos really easy. They won't make us pros, but they certainly give us a very high success rate in terms of 'acceptable' photos - the ones we are likely to want to preserve. With increasing camera sensor sizes and pixel densities, photo files have grown in size. A 16 MP camera gives JPEGs of 4 or 5 MB, depending on the compression level; the corresponding RAW files are about 16 MB.
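Some rough, purely illustrative arithmetic (the shooting rate below is an assumption) shows how quickly that adds up:

    photos_per_year = 5000   # assumed number of 'keepers' for a fairly active photographer
    years = 5
    jpeg_mb = 5              # JPEG size quoted above for a 16 MP camera

    total_gb = photos_per_year * years * jpeg_mb / 1024
    print(f"~{total_gb:.0f} GB of JPEGs alone")  # -> roughly 122 GB; keeping the RAW files as well would more than quadruple that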

To cut a long story short, it is easy to amass a photo collection of 100-200 GB or more after a few years of using a modern digital camera. In itself, that is no problem. Modern hard drives can hold a few TB of information and still be reasonably affordable. But are they reliable? Yes, they are. Do they fail? Not too often, but occasionally they do. I had one drive fail within its warranty period and another a few months after it expired. Cost aside, parting with several thousand photos of mine - little pieces of history, as I called them - wouldn't have been pleasant at all. Those two times I was lucky: I had more-or-less decent backups.

So, there you have the challenge: having a backup strategy (and a data restore plan) that will secure both the files themselves and their associated data (e.g., album structures and anything not stored within the files themselves) and will gather those files from all the different computing platforms in use (PCs, laptops, tablets, smartphones, etc.).

The various cloud services offer a truly tempting backup alternative. Google does it for every photo one takes on an Android device and can do it with PC content as well (I believe - I have never tried the latter). In cloud storage services Google already has several competitors - Dropbox, OneDrive, Flickr (for photos) and many others.

Having one's data (photos, in this case) in the cloud comes with a great deal of pluses: it is a kind of backup, the backup of that backup is somebody else's problem, it keeps content accessible from anywhere, it makes content sharing simple, it is easy to use, and it is affordable or even free. OK, that last bit regarding cost varies with the data volume needed - 100 GB won't be available for free.

Is the cloud truly reliable? Hmmm.... Yes, it mostly is. Does it ever fail? Hmmm.... Yes, it does. Or, at least, it may fail to provide access to one's data when one needs it. Occasionally cloud services close down or change their terms of service, etc. That may or may not be a bad thing. It happens, though. Then, there is the question of bandwidth: how much time does one need to recover the data, if needed? Is that an easy process? And finally, there is the question of privacy: what privacy level can one expect for one's data if those are stored in the cloud? The answers to these last questions vary depending on the cloud service provider. And on one's confidence in the provider's policy.
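On the bandwidth point, a rough calculation is sobering. The figures in the Python snippet below are assumptions rather than measurements, but they give the order of magnitude:

    # How long to pull a whole photo collection back down from the cloud?
    # Both numbers below are assumptions - adjust them to your own case.
    collection_gb = 200      # size of the collection
    link_mbps = 50           # sustained download speed in megabits/s

    seconds = collection_gb * 8 * 1000 / link_mbps   # GB -> megabits -> seconds
    print(f"~{seconds / 3600:.1f} hours")            # roughly 9 hours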

Let's face it realistically, however. In a lot of real-life scenarios, cloud storage is highly practical. The cloud offers options and capabilities that local storage can't easily match - at least not within the range of IT resources an everyday person can maintain. But this doesn't mean that one shouldn't also keep a copy of one's dataset on a medium within hand's reach. After all, when it comes to photos, those are little bits of personal history that we are talking about :)

Sunday 28 September 2014

On September and personal resolutions

'Autumn?' by dr_gorgy
under a CC license
September has always been a time of reflection for me. It marks the end of summer, the beginning of the rainy season - climate change aside - the beginning of the school year (in many places), the end of the holidays (in the northern hemisphere), etc. It is no coincidence that, besides me, many argue that true, real-life calendar years should start in September, not January.

I find myself considering "new year resolutions" in September. Well, I do that in January as well, but that's not my point. In a sense, it may be easier to commit to objectives in September, simply because one is already in "work mode". One can take action or put something into practice immediately. Contrary to that, in January one tends to think about new year resolutions during the festivities, when - I believe - it's easier to be unreasonably ambitious regarding life objectives.

Personal resolutions are tightly connected to people's need for hope. Not quite in the way plain wishes are, that is, but they do represent the intent to act towards a better life. The irony is that September, with its usual change of season, its clouds, rains, etc., is a month that, for some people, helps depression kick in. Maybe that is why people take the time, in September, to decide on actions and objectives, start new activities, embrace new lifestyles, etc. For sure it is a great way to motivate oneself, which is also great for the people around us.

Thus, I believe, "happy new season" wishes are in order! May our wishes come a bit closer to reality this new school year...




Monday 15 September 2014

Pressing that 'send' button....

'Mail Box' by zizzybaloobah
under a CC license
E-mail is nothing new. Actually, among internet communication means, it is rather a dinosaur.

Since e-mail's first steps in the '70s, a lot has happened both to e-mail itself as a technology and to its competing internet-based communication services. In 2012, the three main web-mail providers had over a billion users, with Google leading the race. So far, e-mail looks as if it is here to stay for decades to come. But do we really need it?

Back in e-mail's early days, it was soon understood that if e-mail was to be any good for communication, it needed to offer interoperability among networks, servers and clients. It achieved that by following a development course strongly based on a series of (then) emerging standards, each adding new capabilities to the service. Thus, the early text-only services can now support attachments, formatting, multimedia content, different recipient groups, etc.
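Those layered standards are easy to take for granted, yet they are what lets a single message carry formatting and attachments across any combination of client and server. As a minimal sketch, here is how such a message could be put together with Python's standard library; the addresses, server name and credentials below are placeholders, not real endpoints:

    # Compose a message with a plain-text body, an HTML alternative and an
    # image attachment, then hand it to an SMTP server.
    # All addresses, the server name and the password are placeholders.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "me@example.com"
    msg["To"] = "you@example.com"
    msg["Subject"] = "Holiday photos"
    msg.set_content("Plain-text body for old-school clients.")
    msg.add_alternative("<p>HTML body with <b>formatting</b>.</p>", subtype="html")

    with open("photo.jpg", "rb") as f:   # assumes a photo.jpg next to the script
        msg.add_attachment(f.read(), maintype="image", subtype="jpeg",
                           filename="photo.jpg")

    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()                               # most servers expect TLS
        server.login("me@example.com", "app-password")  # placeholder credentials
        server.send_message(msg)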

E-mail was designed to emulate the classical snail-mail exchange in the digital world. Some of the associated terminology reflects that (e.g., CC). That is not necessarily a bad thing because, well, people do need to exchange information in an affordable, flexible, reliable and resilient way - the kind of information that would normally have been delivered by snail mail some years earlier.

At this point, it would be useful to look at what people tend to do online. The following infographic (2012) can give us a few clues:


How People Spend Their Time Online - infographic by GO-Gulf


E-mail/communication is a strong factor, taking up some 19% of online time. Social media, however, are ahead with 22%. Plus, the trends shown concern service niches well away from e-mail communication.

E-mail use seems to be declining amongst teens, who seem to favour instant messaging. E-mail looks a bit too formal and needs more time between typing the message and being ready to press 'send'. In fact, several instant messaging services (e.g., Skype) have no 'send' button - the user can just press 'Enter'.

Personal experience also suggests that the bulk of my e-mail use could easily be replaced by instant messaging. I've caught myself using e-mail as an instant-messaging client from time to time, pressing 'reply', typing just a few words and pressing 'send', often 'forgetting' to change the subject line or re-check the recipients. From time to time, I also use e-mail as a 'mini-forum' service, exchanging comments with a small group of people. To be honest, that last thing often gives rise to considerable off-topic discussion, with the occasional (polite) trolling, but that happens on personal e-mail so I guess it's OK.

Despite all those indications of e-mail's weaknesses, its biggest strengths are flexibility, acceptance and standardisation. All three sound boring but, really, they make a difference:
  • E-mail is flexible in the sense that one can use it for short or long messages, to one or more recipients, with any formatting, attachments, etc. Contrary to that, most instant messaging solutions (Skype, WhatsApp, etc.) are much more limited in terms of formatting, message size, attachment options and recipient groups.
  • Most likely recipients have an active e-mail address (businesses, public bodies, individuals, etc.). The same doesn't apply to instant messaging services (that thing with Google+ and its messaging capabilities, with user handles instead of e-mail addresses, is a different story). Further to that, e-mail is considered a formal means of communication in an increasing number of countries and organisations. Instant messaging services have the aura of 'informal' and 'casual'.
  • At the technical level, e-mail services are well established. One can use any e-mail client to address any recipient, and the chances are that the recipient will receive and be able to read the message as the sender intended (most of the time, at least). If delivery fails for some reason, the system tells the sender what went wrong. Contrary to that, there is no true 'universal' instant messaging service. People tend to be restricted to the social medium of their choice.
So yes! We still need e-mail. While not impressive anymore, it does what it does quite well. And by forcing us to press that 'send' button, it gives us some time to reflect on what we are about to send. The latter can be life-saving under many circumstances.