Sunday 30 November 2014

Should we add coding to the primary education curriculum?

'Eee Keyboard-PC' by Yoshi5000
under a CC license
Yes, in my humble opinion, we should!

You may think that I have simply been a bit too influenced by the move in Finland to teach typing instead of handwriting in schools. No. In fact, although I see some advantages in introducing courses for typing instead of cursive writing, I wouldn't have gone nearly as far. After all, we still need to be able to communicate, even when electricity is not available.

With coding, however, things are different. As others have explained, coding is more a way of thinking than an exercise for those who have computers. Coding, regardless of the programming language used, requires skills for describing and understanding a problem, possibly breaking it up into smaller, manageable chunks, and devising a solution employing logic.

Coding can be taught almost hand-in-hand with mathematics (especially numerical analysis) and I suspect that would help kids' skills in both fields. It wouldn't need too much teaching time, either. Most probably, an hour or two per week would be enough to motivate kids to engage further with the topic.
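To give a flavour of what such an exercise might look like (a purely illustrative sketch of my own - the function names and sample scores are made up, not taken from any actual curriculum), here is a tiny Python example that breaks a simple numerical problem, finding the average of a set of test scores, into small, named steps:

# A toy classroom-style exercise: compute the average of a list of scores,
# broken into small steps that a pupil can reason about one at a time.

def add_up(numbers):
    # Step 1: add all the numbers together.
    total = 0
    for n in numbers:
        total = total + n
    return total

def average(numbers):
    # Step 2: divide the total by how many numbers there are.
    return add_up(numbers) / len(numbers)

scores = [7, 9, 6, 8, 10]           # sample data, made up for the exercise
print("Total:", add_up(scores))     # prints: Total: 40
print("Average:", average(scores))  # prints: Average: 8.0

Even something this small exercises decomposition (splitting the problem into 'add up' and 'divide'), a bit of arithmetic and a touch of logic.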

If the curriculum also included user interface design, then coding would also blend in elements of fine art, psychology, etc.

There would also be additional benefits for pupils, such as learning to collaborate in teams towards solving a particular problem, developing self-confidence in problem-solving, finding additional outlets for creativity, seeking out/creating innovation in software, getting better at using computers and software, etc.

As an added bonus, coding does not require expensive infrastructure. It can be done on basic hardware (including tablets, old PCs, etc.) using free software and, today, an increasing number of households own a computer or a tablet. There is also a lot of help for coders available online, including websites with coding courses, communities of programmers, etc. Coding classes could even run without access to computers, but I admit that this would be rather boring for the kids.

So, yes. Let's give coding a try in schools and, who knows, maybe the coming generations will feature a higher number of brilliant coders or, at least, be better at using logic to tackle challenges.

The video below features Thomas Suarez (not the typical 12-year-old) giving a TEDx talk:


Sunday 23 November 2014

The app update ritual

'My iPhone family pile'
by Blake Patterson
under a CC license
I've been using computers - for work and leisure - for at least 20 years now. In my early PC days, software updates were a rare thing, usually associated with major changes. Update deployment was, at the time, a fully manual procedure. One had to find a disk with the new software version and install it on the target PC.

With time, the internet gained ground and developers started using it as an alternative update distribution vehicle. It has been a very welcome thing, indeed; it is normally an easy process and allows for much more frequent updates.

As the number of our digital devices grows, software updates have become an increasingly important part of our (digital) lives. Smartphones, tablets, routers and intelligent devices (thermostats, smart light bulbs, cameras, even camera lenses) all allow for their software to be updated.

The frequency of updates depends on the product and its developers but for "small" applications and apps it can be very high. I have come across Android apps that have had 2-3 updates per week. And that is exactly where I believe I can spot a problem. The update process is beginning to take a bit more time (as well as bandwidth and data volume) than perhaps it should. Taking one's smartphone offline for a day most probably means being prompted for a few tens of app updates when it comes back online.

Has the ease of deploying updates made developers sloppier? Has it increased the pressure on them to release software as soon as possible, even if not all features are there and even when the software has undergone only limited testing? Or is it just adding value for users, offering them access to new functionality, design enhancements and innovative stuff as they are created? As a software user/consumer, I'd very much like to think the latter, though I suspect that we are mostly victims of the former. To be fair, though, for software and apps I really value, hitting the "update" button is often accompanied by great expectations :-)

There is nothing wrong with improving users' experience through well-planned software updates. Needless to say, providing updates to patch security holes or fix critical bugs is a must, too. However, offering updates on too frequent a basis can have a negative impact on users' perception of software quality and come at a cost (users' time and productivity, network bandwidth, etc.). Is it perhaps time for software developers to re-discover quality practices? Or is the constant updating something that we, the software users, will need to get used to (and perhaps even be taught to like)?


Sunday 16 November 2014

Cloud automation and the internet of things

'Robot' by Christelle
under a CC license
Day by day, our lives become increasingly digital. With the internet gaining an ever larger share of our everyday routine, it was inevitable that someone would start interconnecting our network-capable devices (something I think I've written about before...).

At the beginning, things were a bit basic: being able, for instance, to check our cloud-based mailbox and our automatically synchronising, cloud-residing files from all our devices (desktop, smartphone, tablet, etc.).

Then cloud applications upped their intelligence a notch. It became possible - for example - to send somebody an email proposing a meeting date; the cloud service would add that date to the recipient's calendar and the recipient's smartphone would remind the user in time for the proposed meeting.

With more and more web services, programs and devices having public APIs, cross-application functionality has taken off and the user mashup potential has become evident. It may sound complicated but the fact is that it can simplify our daily lives (and - possibly - increase our geek level, too!). It is now possible to check on and control web applications in order to achieve things that in the past would have required a separate web service, app or program.

Let's take IFTTT as an example (IFTTT stands for 'If This Then That', by the way - do check their website!): a user can choose from a large list of web services, devices with web output, smartphone events, etc. and define what should happen when something specific occurs. For instance, User1 can set IFTTT to monitor the Twitter posts of User2 and, when a new tweet is posted, IFTTT can send an SMS to User1's mobile or forward that post to User1's email, etc. Interesting? It can get better. Imagine using it with networked devices, such as a networked thermostat (e.g., a Nest thermostat), a networked light installation (e.g., Philips Hue) or a signal-producing USB device (e.g., Blink(1)), etc. For instance, you can increase the temperature at home when leaving work or set the lights to the bright setting when an incoming call comes from work. All of a sudden, it is possible to achieve automation that, albeit simple, would have been next to impossible to do (cheaply) a few years ago.
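Just to illustrate the 'if this then that' idea itself - and not IFTTT's actual service or API, which is configured through its website and apps rather than through code - here is a minimal Python sketch of a trigger/action rule; the trigger and action functions are placeholders of my own making:

# A toy 'if this then that' rule: when a trigger produces an event, run an action.
# Purely illustrative - not how IFTTT itself is implemented or configured.

def new_tweet_posted():
    # Placeholder trigger: a real one would poll a feed or API and
    # return the new post if there is one, or None otherwise.
    return "Just published a new blog post!"

def send_sms(text):
    # Placeholder action: a real one would hand the text to an SMS gateway.
    print("SMS:", text)

rules = [
    (new_tweet_posted, send_sms),   # (trigger, action) pairs
]

for trigger, action in rules:
    event = trigger()       # 'if this...'
    if event is not None:
        action(event)       # '...then that'

Services like IFTTT essentially let users wire up such trigger/action pairs without writing any code at all.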

Needless to say, IFTTT is not the only player around. Zapier, Yahoo Pipes, We Wired Web, Cloudwork and others - many others - are available, some for free, some at a cost. I feel certain that more will follow. I believe that what we are seeing is the early days of automation for the masses :-)

Of course, by interconnecting devices and services we are exposing an even larger part of our (real) lives to third parties. This, inevitably, implies risks. Rogue or simply irresponsible service providers may opt to sell our personal data; hackers may gain control of our smartphones, lights, etc. Our privacy may be compromised in ways that may not be immediately obvious, perhaps in directions that we wouldn't really want.

As always, innovation, in itself, is not good or bad. It is just something new. It is up to us to find the best way to use it. To strike the right balance. To shape the market into the form we want, placing the right safeguards and, ultimately, to make our lives a bit better (or funnier... or geekier...), while keeping us on the safe side.

Disclosure note (and some of the usual 'fine print'): I am not affiliated with, nor have I received any subsidy/grant/benefit in return for this post from, any of the companies whose products are mentioned above. Mentioning a product or a service in this post is not meant to constitute an endorsement (as I have not, personally, used all those products). The names of the above-mentioned products and services are property of their respective owners.

Sunday 9 November 2014

Compatibility: the challenge for digital archiving

'5 1/4 floppy disk' by Rae Allen
under a CC license
Today I spent a good couple of hours migrating some 15-year-old e-mails of mine from a legacy e-mail client to Thunderbird. It wasn't a difficult process but it did need a bit of research to figure out the steps needed to do the job and to try a couple of suggested alternatives. Last week I had a (shorter) adventure in getting text out of Word 2.0 and WordPerfect documents. Maybe 15 or 20 years is a long time in the digital world but that won't stop me from mentioning - again - the challenges of forward/backward compatibility in media and formats.

(Sigh)


Whoever has been using a computer for a fair amount of time is probably aware of the advice to back up their data. He/she may not actually be following it, or may not even know how to do it, but he/she is very likely to have heard the advice.

There are plenty of reasons to back up one's data. The main one, of course, is security against data loss due to:
  • hardware failure (e.g., hard drive damage)
  • disaster of any kind
  • user error (e.g., file deleted + trash can emptied + free space wiped, file overwritten, etc.)
  • malicious act (e.g., file destroyed by malware of any kind), etc.
For the enterprise environment, backup is (supposed to be) a must. In certain countries, the backup of specific corporate data is mandated by law. Regardless of that, corporate backup tends to be more comprehensive, maintaining data versions, multiple copies, distribution of copies across different media and locations, ideally both on-site and off-site, etc.

Corporations that depend on their data or need to keep a digital archive, inevitably, have dedicated infrastructure and people to take care of their backup needs.

Individuals, though, normally have much less. Yes, there is plenty of software, both free and commercial, that can take backups. Also, most OSes have some kind of built-in backup/restore utility. However, their user-friendliness and their compatibility across different platforms or, even, across major OS versions are not guaranteed.

Even if a user chooses to stick to the same backup solution (which could be something as simple as a plain file copy from one disk to another) there is the challenge of medium suitability and durability. Anyone who has been using a PC for more than 10 years is likely to have used floppy disks and/or ZIP drives and/or CDs and/or DVDs and/or external hard drives and/or flash drives for their temporary or long-term backups. The problem is that some of the aforementioned media are not readily supported by a modern PC, e.g., modern PCs have neither 5¼'' drives to read the old floppies, nor parallel ports to support the original ZIP drives.
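As an aside, the 'plain file copy' kind of backup mentioned above really can be just a few lines; here is a minimal Python sketch (the paths are hypothetical and there is no versioning, verification or error handling):

# A minimal 'plain file copy' backup: copy a folder from one disk to another.
# Paths are hypothetical - adjust to your own setup.
import shutil
from datetime import date

source = "/home/user/Documents"                                   # hypothetical source folder
destination = "/mnt/backup_disk/Documents_" + str(date.today())   # dated copy on another disk

shutil.copytree(source, destination)    # copies the whole folder tree
print("Backed up", source, "to", destination)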

In order to be on the safe side, a user keen on archiving should, from time to time, migrate data from one medium to another. This is a very tedious task, especially if a large number of storage media are involved, but let's assume that it is reasonably feasible.

The ultimate challenge is compatibility across file formats and program versions. Common formats that adhere to widespread standards are normally in the clear. Image files, for instance, such as JPEG or GIF or BMP, have a long history, so files created decades ago will be displayed by virtually all modern software. The opposite doesn't necessarily apply, i.e., files in newer format versions may not be displayable by legacy software. When it comes to formats for files that are not so frequently exchanged, however, compatibility may be an issue. Take e-mail files, for instance. Different e-mail clients tend to store e-mail in different structures. Nowadays, when e-mail clients are often part of the OS, things tend to be clearer, though a few years ago there was considerably more fragmentation (e.g., different formats for Eudora, Netscape/Unix, Outlook Express, Outlook, Pegasus Mail, etc.). In fact, today, a large portion of our e-mail stays in the cloud, which sort of solves the compatibility problem, although it introduces a different set of challenges.
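As a small illustration of what such a migration can look like in practice - assuming the legacy client stores or can export its mail in the common mbox format; the file names below are made up - Python's standard mailbox module can read one mbox file and copy its messages into another:

# Copy messages from a legacy mbox file into a fresh one, listing subjects along the way.
# File names are hypothetical; the mailbox module is part of Python's standard library.
import mailbox

old_box = mailbox.mbox("old_client_export.mbox")   # e.g., an export from the legacy client
new_box = mailbox.mbox("migrated.mbox")            # destination file, created if missing

for message in old_box:
    print("Migrating:", message["subject"])
    new_box.add(message)

new_box.flush()    # write everything to disk
new_box.close()
old_box.close()

The tricky part, of course, is usually getting the legacy client to produce something standard like mbox in the first place.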

Is there a bottom line to this? Well, not really. If one needs to have data from the past, one needs to either maintain legacy hardware and software (which may or may not be possible) or put in the effort to migrate the data to newer formats and media. It sounds deceptively simple, doesn't it?

(The following video is a talk by Chad Fowler from a Scala Days conference regarding 'legacy' in software development - it is a long, not well-lit, but interesting presentation.)



Tuesday 4 November 2014

There is plenty of information around but how much of it can we practically find?

'Another haystack' by Maxine
under a CC license
The frank answer is: it depends - on many things.

First of all, I'm talking about information that is available on the internet. That excludes books that are not available online, databases that run locally, etc. More specifically, I'm talking about information that has been indexed by at least one search engine, at least at the level of general content description. I'm not differentiating among the different types of information, though.

Estimates of the size of the internet in 2013 spoke of about 759 million websites, of which some 510 million were active, which - in turn - hosted some 14.3 trillion webpages. Google has indexed about 48 billion of those and Bing about 14 billion. The amount of accessible data is estimated to be about 672 million TB (terabytes), which likely includes the indexed content and part of the deep web.

On top of that, we have the dark internet - but this is a different thing.

So, there is a lot of information indexed (and much more that lies beyond indexes). Year by year we are getting more and more used to using and relying on the internet. But how "much" useful information can we normally find?

Assuming we are talking about seeking "general information", the main search tool is a search engine. While common search queries return tens of millions of results, most users tend to focus on the first few hits. SEO experts often talk about users sticking to the first 5 search engine hits or - at most - the results of the first page. Some disagree, but still very few users go through all the results. Of course, persistent people seeking specific information do tend to try different search queries in order to reach reasonably relevant information.

The interesting point regarding search engines and their results is that the results on the 1st page are very valuable. So the question is: if some invest in placing their content at the top of the search results, how can the user find relevant content that is maintained by people not willing (or able) to invest in SEO, e.g. a non-profit or just enthusiast individuals?

Of course, search engines use result ranking algorithms that take into consideration a very long list of factors. Content quantity and quality are amongst those factors; popularity is another, etc. However, the way those ranking algorithms work (the exact formula is kept secret) may include - e.g., in the case of Google - a ranking bonus for content from the user's Google Plus contacts. They may also include a fading mechanism, where very old, possibly unmaintained, information is ranked below more recent content. Websites offering content over a secure connection (via HTTPS instead of plain HTTP) get a bonus, too, etc.
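Just to make the idea of factor-based ranking concrete - and to be clear, this is a toy illustration of my own, not any real search engine's formula; the factors, weights and scores are entirely made up - a ranking score might combine a handful of such signals like this:

# A toy ranking score combining a few of the factors mentioned above.
# Entirely illustrative: real engines use far more signals and keep their formulas secret.

def rank_score(relevance, popularity, freshness, uses_https):
    # Each factor is a number between 0 and 1; the weights are arbitrary.
    score = 0.5 * relevance + 0.3 * popularity + 0.2 * freshness
    if uses_https:
        score += 0.05          # small bonus for serving content over HTTPS
    return score

pages = {
    "well-optimised commercial site": rank_score(0.6, 0.9, 0.8, True),
    "enthusiast site, rarely updated": rank_score(0.9, 0.2, 0.3, False),
}

for name, score in sorted(pages.items(), key=lambda kv: kv[1], reverse=True):
    print(round(score, 2), name)

In this made-up example the more relevant enthusiast page still ends up below the better-optimised one - which is precisely the tension described above.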

All those twists and fine-tuning are meant to help the "average user" (I guess) reach the content they need, while at the same time giving content providers (including companies investing in advertising and SEO) a chance to reach their target audience. Most of the time, advanced users will employ additional tricks to refine their searches but (I assume) the ranking algorithms work in the same way for them, too.

Needless to say, when search engines (and Google in particular) modify their ranking algorithms, many people worry and many people get busy.

To make things slightly more challenging, content on the internet tends to change with time. Webpages may disappear due to technical reasons. Links to content may be hidden in some regions due to the right to be forgotten (a very interesting topic on its own). Or content may be removed for a variety of reasons, e.g. copyright violations or, even, DMCA takedown notices.

The point is that finding the information one wants needs persistence, intuition, imagination, good knowledge of how search engines work, sufficient time and luck (not necessarily in this order). The problem that remains is that this information is very likely to represent only part of the whole picture.

Some will say that this has always been the case when seeking information. True. But now that accessible information feels "abundant", the temptation to stop looking for new data after the first few relevant search engine hits is really strong.

Unfortunately, responsibility still falls on the user to be wary of gaps or biases of any kind and to keep looking until the topic in question is properly (or reasonably?) addressed. It's not an easy task. With time, however, it's likely that we'll develop additional practical norms to handle it.


Sunday 2 November 2014

The Dunning-Kruger effect...

'Neon Jester' by Thomas Hawk
under a CC license
...or 'confidence and competence are two very different things' or, to put it more bluntly, 'never attribute to malice that which is adequately explained by stupidity'.

The Dunning-Kruger effect is the condition where one feels confident about one's performance despite not having the required skills. At the same time, skilled individuals may lack confidence because they assume that they are no better than their peers. Thus, self-evaluation tends to work in different ways for skilled and unskilled individuals, with the former being more critical of their performance while the latter fail to realise their shortcomings.

The Dunning-Kruger effect manifests itself in many parts of everyday life and could help explain several of the shortfalls we witness around us. For instance, managers who may have been selected for their confidence and overall attitude may be prone to repeated errors of judgement if they are not skilled in the subject matter of their business. Since a tall, multi-layer management structure is commonly adopted across many sectors, such cases might be more common than one would think.

However, the effect does have its limits and it can be mitigated or even avoided. The fact that it does not demonstrate itself with the same intensity across different cultures indicates that it is affected by the way people are raised and the environment they are exposed to. It also suggests that it can be addressed through the education system, which would also work on the approaches that people use for self-evaluation.

Indeed, we need to ensure that people understand the value of expertise, especially when we are talking about people that go up the management ladder. We also need to make experts more visible and accessible, in particular to people in power. More importantly, we need to find ways to promote teamwork and encourage the formation of multi-skill (and possibly also multi-cultural) flexible groups within organisations, not being afraid to use flat or matrix organisational structures, so as to ensure that problems are correctly identified and assessed and that solutions are well-conceived and implemented.

These are easy things to say but they would require plenty of small changes in order to ensure that such a system would survive. For example, remuneration, benefits and motivation perks would need to be allocated under a modified rationale. Appraisals would also need to be carried out in a different way. Quality practices (which normally do assume that tasks are carried out by suitable experts) may also need to be adapted.

Dilbert by Scott Adams, Strip of 26/08/1992