Showing posts with label information technologies.

Sunday, 30 November 2014

Should we add coding to the primary education curriculum?

'Eee Keyboard-PC' by Yoshi5000
under a CC license
Yes, in my humble opinion, we should!

You may think that I have simply been a bit too influenced by the move in Finland to teach typing instead of handwriting in schools. No. In fact, although I see some advantages in introducing courses for typing instead of cursive writing, I wouldn't have gone nearly as far. After all, we still need to be able to communicate, even when electricity is not available.

With coding, however, things are different. As others have explained, coding is more a way of thinking than an exercise for those who have computers. Coding, regardless of the programming language used, requires skills for describing and understanding a problem, breaking it down into smaller, manageable chunks and devising a solution using logic. To make that a bit more concrete, the small sketch below shows what I mean.
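
Here is a tiny example in Python (my own sketch; the "pocket money" scenario and the function names are made up purely for illustration) of how an everyday question can be broken into small, manageable steps:

```python
# A toy example of "coding as thinking": the question
# "how many weeks do I need to save up for a game?"
# broken down into small, manageable steps.

def weekly_savings(pocket_money, spent_on_sweets):
    """Step 1: work out how much is left over each week."""
    return pocket_money - spent_on_sweets

def weeks_needed(price, savings_per_week):
    """Step 2: keep 'saving' week by week until the price is reached."""
    total, weeks = 0, 0
    while total < price:
        total += savings_per_week
        weeks += 1
    return weeks

# Step 3: put the pieces together and answer the question.
per_week = weekly_savings(pocket_money=5, spent_on_sweets=2)
print(weeks_needed(price=30, savings_per_week=per_week), "weeks")
```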

Coding can be taught almost hand-in-hand with mathematics (especially numerical analysis) and I suspect that would help kids' skills in both fields. It wouldn't need much teaching time, either. Most probably, an hour or two per week would be enough to motivate kids to engage further with the topic. The sketch below gives an idea of the kind of exercise I have in mind.
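
As an illustration (again, my own sketch rather than a reference to any existing curriculum), estimating pi by "throwing darts" at a square ties a simple coding loop directly to a piece of mathematics:

```python
# Estimate pi by picking random points in the unit square and
# counting how many fall inside the quarter circle of radius 1.
import random

def estimate_pi(darts=100_000):
    inside = 0
    for _ in range(darts):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:   # point lands inside the quarter circle
            inside += 1
    return 4 * inside / darts      # area ratio gives pi/4

print(estimate_pi())  # roughly 3.14; more darts give a better estimate
```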

If the curriculum also included user interface design, then coding would also blend in elements of fine art, psychology, etc.

There would also be additional benefits for pupils, such as learning to collaborate in teams to solve a particular problem, developing self-confidence in problem-solving, finding additional routes to creativity, seeking out/creating innovation in software, getting better at using computers and software, etc.

As an added bonus, coding does not require expensive infrastructure. It can be done on basic hardware (including tablets, old PCs, etc.) using free software and, today, an increasing number of households own a computer or a tablet. There is also a lot of help for coders available online, including websites with coding courses, communities of programmers, etc. Coding classes could even run without access to computers, but I admit that this would be rather boring for the kids.

So, yes. Let's give coding a try in schools and, who knows, maybe the coming generations will feature a higher number of brilliant coders or, at least, be better at using logic against challenges.

The video below features Thomas Suarez (not the typical 12-year-old) giving a TEDx talk:


Sunday, 5 October 2014

Do we make the most out of (computing) technology?

Typewriter photo
'Typewriter' by Reavenshoe Group
under a CC license
Sadly, the brief answer is no. Most of us have in our hands, at home or at work, computing or other electronic hardware that would have been considered pure fiction 20-30 years ago. Although we have changed the way we live and work due to technology, the steps forward we have made don't necessarily go hand in hand with the leaps in technology we have witnessed.

Of course, there are exceptions to the observation above, but let me mention a couple of examples and you can tell me whether they sound familiar or not.

At the place where I work, all employees have PCs. The PCs' primary tasks are e-mail, word processing and printing, and web browsing (not necessarily in that order). Yes, sure, some people do statistical analysis, some DTP, some database design and some feed input into a number of databases but, still, the majority of PC time is devoted to the three things I mentioned before. You may think that the volume of work or the quality of the output has increased. Indeed, it may have. But there is still a small number of regular PC users who treat word-processing software more like a typewriter than a modern PC. OK, I'm exaggerating here, but I believe you can see my point.

The other major change has been in the field of mobile devices. Each smartphone is practically a small computer, powerful enough to handle not only calls and messages but also browsing, VoIP, video chat and practically most of the stuff that would run on a desktop computer. Do people use those features? Yes, some people use some of them. But others seem to have problems with the new technology. The following infographic shows an approximate breakdown of the various uses of smartphones.



According to the infographic above, new stuff (web, search, social media, news, other) accounts for a moderate-to-low 24% of smartphone use time. An interesting question would be whether the total time spent interacting with smartphones is higher than before, when we had plain mobile phones. I suspect it is.

So why can't we make more and different things now that we have such computing power in our hands?

I don't really know (I'll be doing some guessing here) but here are some possible reasons:
  • Bad design of the user interface. Yes, all manufacturers and software designers call their interfaces intuitive, but that is not always the case. To make things worse, I don't believe there is such a thing as the perfect user-friendly, intuitive interface. It will always take persistence, imagination and luck to use an interface successfully. But there are design basics that can help. Below is an early (very) critical review of Windows 8 (which, btw, I rather like as an OS).


  • Crappy or buggy software; software incompatibilities; software complexity; inconsistency across platforms and devices; lack of decent manuals or efficient tutorials; lack of user training (it sounds old-fashioned but in some cases it could help).
  • Software cost and/or poor use of open source software. This particular point always bugs me. It's fine to pay for software that enhances productivity. But why do businesses avoid investing in open source software in a coherent way? Especially in cases where the open source alternative proves better in usability, compatibility and, well, cost.
  • Hardware restrictions. Yes, you read correctly. We have plenty of processing power, but we may have other limitations that hinder full use of that power. For instance, smartphones can do a lot, but they need to be reliably connected to a fast network, and that comes at a cost that in many cases is undesirable or even excessive. Another example is modern PCs, which are powerful but often come with the minimum possible screen real estate. Just adding a second monitor would boost productivity (and save on printer paper), but the majority of workplaces I know of stick to small single monitors (often badly positioned in front of the user). Another all-too-common thing is policy restrictions on the use of PCs, some of which severely impact usability, especially when paired with an IT department that refuses to listen to the users' needs.
  • IT departments that are overloaded with routine tasks and don't have the resources to add new capabilities to their systems (an extra programmer could do miracles in many circumstances).
  • No reliable communication between (casual) users and developers to assist new product development or product improvement (yes, there are beta testers and developers can gather telemetry data, but this is not even close in magnitude to what I am referring to).
The disappointing thing is that most of the problems above are not so hard to address. Maybe the entire product-market-user model needs some rethinking. Maybe developers and, possibly, manufacturers need to put more effort into durable platforms and commit to supporting them for longer periods. And, finally, maybe we, the users, need to be more conscious of our options/choices and voice our thoughts/wishes/concerns when needed. Just saying...


Sunday, 15 April 2012

The "WE" Individuals of the Digital Era

Bee hive photo
'Bee hive 2' by Botters
under a CC license
Knowledge has always been a valuable thing. A lot of resources are invested in acquiring it, be it through studies or just by hiring the right people. And there are tools available to transmit it (books, schools, the media, etc.) as well as means to protect it (intellectual property laws, non-disclosure agreements, information-concealing technologies, etc.).

Information technologies have considerably changed the knowledge landscape, though. Information tends to be better distributed or, at least, more accessible. With a bit of exaggeration, one could argue that it is getting increasingly easier to become "an expert" on something.

As usual, however, there is a catch. Well, in fact, many catches:
Reliability; accuracy; suitability; completeness... and that is just to name a few.

The interesting point is that those same catches also apply to several of the traditional means of knowledge dissemination, such as books. The difference, however, is that the perception of reliability, accuracy, etc., of what we find online tends to be favourably biased. I don't know why. Maybe because when we look something up on the internet we want to get somewhere quickly and easily.

The bottom line is that the people of the digital era are no longer isolated knowledge-islands but, rather, autonomous nodes of a network: they have access to "collective knowledge" and, sometimes, contribute to it. Individuals are (digitally) backed up by many others, although the process often happens unconsciously, well hidden in the background.

At any rate, in the modern business world, that brings some new facts onto the scene.

Firstly, the "layman" should now be considered as someone with access to a lot of information, possibly unfiltered, possibly biased, possibly incomprehensible to him/her, maybe even wrong but, at any rate, information.

Secondly, the expectations of an "expert" should now be somewhat different: having some knowledge in his/her field is just not enough anymore. Experts should be able to go much beyond the layman of today and considerably beyond the well-informed professional who hires them.

I have the feeling that we are now in a transitional period, where the new and old types of common people/experts co-exist. Understandably, that leads to confusion, especially when our expectations of others are not matched in reality.

Daniel Gulati, a New York entrepreneur, has gone as far as to advise us: "Beware of the everyday expert" in his article in Harvard Business Review. I understand his concerns. But, on the brighter side of things, today, business people may be able to do much more on their own than what was normally possible a few years ago. That is particularly helpful for entrepreneurs, I guess, although the audience they are addressing has also become more demanding.

To me, the real challenge is how we use this potential to make our lives better, at the professional but also at the personal (societal) level.

P.S. (1) And, just to be fair, a lot of focus has been falling lately on the knowledge-based economy and the society of knowledge. The niche of this post is really tiny and has to do with everyday people in their everyday lives. In other words, please don't extrapolate to the world-economy level :-)

P.S. (2) Just because we have access to information that many others have provided, we are not necessarily less "individual" than we used to be. No, we haven't reached the Borg stage yet. Surely, many people would feel handicapped if Google (or Bing) went offline, but it's the access to information they'll be missing, not the people behind it :-)