Showing posts with label users. Show all posts

Sunday, 16 November 2014

Cloud automation and the internet of things

'Robot' by Christelle
under a CC license
Day by day, our lives become increasingly digital. With the internet gaining ground in our everyday routine, it was inevitable that someone would start interconnecting our network-capable devices (something I think I've written about before...).

At the beginning, things were a bit basic: for instance, being able to check our cloud-based mailbox and our automatically synchronising cloud-residing files from all our devices (desktop, smartphone, tablet, etc.).

Then cloud applications upped their intelligence a notch. It became possible, for example, to send somebody an email proposing a meeting date; the cloud service would add that date to the recipient's calendar, and the recipient's smartphone would remind them in time for the proposed meeting.

With more and more web services, programs and devices having public APIs, cross-application functionality has taken off and the user mashup potential has become evident. It may sound complicated, but the fact is that it can simplify our daily lives (and possibly increase our geek level, too!). It is now possible to check on and control web applications in order to achieve things that in the past would have required a separate web service, app or program.

Let's take IFTTT as an example (IFTTT stands for 'If This Then That', by the way - do check their website!): a user can choose from a long list of web services, devices with web output, smartphone events, etc., and define a reaction to be triggered when something specific happens. For instance, User1 can set IFTTT to monitor the Twitter posts of User2, and when a new tweet is posted, IFTTT can send an SMS to User1's mobile, email the post to User1, etc. Interesting? It gets better. Imagine using it with networked devices, such as a networked thermostat (e.g., a Nest thermostat), a networked light installation (e.g., Philips Hue) or a signal-producing USB device (e.g., Blink(1)). For instance, you can increase the temperature at home when leaving work, or set the lights to a bright setting when an incoming call comes from work. All of a sudden, it is possible to achieve automation that, albeit simple, would have been next to impossible to do (cheaply) a few years ago.
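To make the 'if this then that' idea concrete, here is a minimal sketch of such a rule engine in Python. All the names here (Rule, run_rules, the stand-in trigger and action) are hypothetical illustrations of the pattern, not IFTTT's actual API:

```python
# A toy "if this then that" rule engine: each rule pairs a trigger
# ("this") with an action ("that") to run when the trigger fires.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    trigger: Callable[[], bool]   # "this": did the event happen?
    action: Callable[[], None]    # "that": what to do when it did

def run_rules(rules):
    """Check every rule once and fire the actions whose triggers are true."""
    for rule in rules:
        if rule.trigger():
            rule.action()

# Example: when User2 posts a new tweet, notify User1.
new_tweet_seen = True   # stand-in for actually polling a Twitter feed
notifications = []      # stand-in for an SMS gateway

rules = [Rule(trigger=lambda: new_tweet_seen,
              action=lambda: notifications.append("SMS to User1"))]
run_rules(rules)
print(notifications)  # → ['SMS to User1']
```

A real service would, of course, poll or subscribe to the trigger source continuously and call out to the action's web API, but the trigger/action pairing is the whole idea.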

Needless to say, IFTTT is not the only player around. Zapier, Yahoo Pipes, We Wired Web, Cloudwork and others - many others - are available, some for free, some at a cost. I feel certain that more will follow. I believe that what we are seeing are the early days of automation for the masses :-)

Of course, by interconnecting devices and services we are exposing an even larger part of our (real) lives to third parties. This, inevitably, implies risks. Rogue or simply irresponsible service providers may opt to sell our personal data, hackers may gain control of our smartphones, lights, etc. Our privacy may be compromised in ways that may not be immediately obvious, perhaps in directions that we wouldn't really want.

As always, innovation, in itself, is not good or bad. It is just something new. It is up to us to find the best way to use it. To strike the right balance. To shape the market into the form we want, placing the right safeguards and, ultimately, to make our lives a bit better (or funnier... or geekier...), while keeping us on the safe side.

Disclosure note (and some of the usual 'fine print'): I am not affiliated with, nor have I received any subsidy/grant/benefit in return for this post from, any of the companies whose products are mentioned above. Mentioning a product or a service in this post is not meant to constitute an endorsement (as I have not personally used all those products). The names of the above-mentioned products and services are property of their respective owners.

Tuesday, 4 November 2014

There is plenty of information around but how much of it can we practically find?

'Another haystack' by Maxine
under a CC license
The frank answer is: it depends; on many things.

First of all, I'm talking about information that is available on the internet. That excludes books that are not available online, databases that run locally, etc. More specifically, I'm talking about information that has been indexed by at least one search engine, at least at the level of general content description. I'm not differentiating among the different types of information, though.

Estimates of the size of the internet in 2013 spoke of about 759 million websites, of which some 510 million were active, hosting - in turn - some 14.3 trillion webpages. Google has indexed about 48 billion of those and Bing about 14 billion. The amount of accessible data is estimated at about 672 million TB (terabytes), which likely includes the indexed content and part of the deep web.

On top of that, we have the dark internet - but this is a different thing.

So, there is a lot of information indexed (and much more that lies beyond indexes). Year by year, we are getting more and more used to using and relying on the internet. But how "much" useful information can we normally find?

Assuming we are talking about seeking "general information", the main search tool is a search engine. While common search queries return tens of millions of results, most users tend to focus on the first few hits. SEO experts often talk about users sticking to the first 5 search engine hits or - at most - the results of the first page. Some disagree, but still very few users go through all the results. Of course, persistent people seeking specific information do tend to try different search queries in order to reach reasonably relevant information.

The interesting point regarding search engines and their results is that the results on the 1st page are very valuable. So the question is: if some parties invest in placing their content at the top of the search results, how can the user find relevant content that is maintained by people not willing to invest in SEO, e.g. a non-profit or enthusiastic individuals?

Of course, search engines use result-ranking algorithms that take into consideration a very long list of factors. Content quantity and quality are amongst those factors; popularity is another, etc. However, the way those ranking algorithms work (the exact formula is kept secret) may include - e.g., in the case of Google - a ranking bonus for content from the user's Google Plus contacts. They may also include a fading mechanism, where very old, possibly unmaintained, information is ranked below recent content. Websites offering content over a secure connection (via https instead of plain http) get a bonus, too, etc.
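The multi-factor idea can be illustrated with a toy scoring function: each page gets a weighted sum of signals, plus bonuses and a freshness decay. The factors and weights below are entirely invented for illustration - as noted, the real formulas are secret:

```python
# Toy multi-factor ranking: weighted signals, an https bonus,
# and a "fading" multiplier that demotes very old content.
def score(page, weights):
    s = weights["quality"] * page["quality"]
    s += weights["popularity"] * page["popularity"]
    if page["https"]:
        s += weights["https_bonus"]
    # fading mechanism: each year of age shaves off a fraction of the score
    s *= weights["freshness_decay"] ** page["age_years"]
    return s

weights = {"quality": 1.0, "popularity": 0.5,
           "https_bonus": 0.2, "freshness_decay": 0.9}

pages = [
    {"name": "old_http",  "quality": 0.9, "popularity": 0.8,
     "https": False, "age_years": 10},
    {"name": "new_https", "quality": 0.7, "popularity": 0.6,
     "https": True,  "age_years": 1},
]

ranked = sorted(pages, key=lambda p: score(p, weights), reverse=True)
print([p["name"] for p in ranked])  # → ['new_https', 'old_http']
```

Note how the older page wins on raw quality and popularity but still ranks below the fresher https page once the decay and bonus are applied - which is exactly the kind of behaviour that keeps SEO people busy.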

All those twists and fine-tunings are meant to help the "average user" (I guess) reach the content they need, while at the same time giving content providers (including companies investing in advertising and SEO) a chance to reach their target audience. Most of the time, advanced users will employ additional tricks to refine their searches, but (I assume) the ranking algorithms work in the same way for them, too.

Needless to say, when search engines (and Google in particular) modify their ranking algorithms, many people worry and many people get busy.

To make things slightly more challenging, content on the internet tends to change with time. Webpages may disappear due to technical reasons. Links to content may be hidden in some regions due to the right to be forgotten (a very interesting topic on its own). Or content may be removed for a variety of reasons, e.g. copyright violations or even DMCA takedown notices.

The point is that finding the information one wants requires persistence, intuition, imagination, good knowledge of how search engines work, sufficient time and luck (not necessarily in that order). The problem that remains is that this information is very likely to represent only part of the whole picture.

Some will say that this has always been the case when seeking information. True. But now that accessible information feels "abundant", the temptation to stop looking for new data after the first few relevant search engine hits is really strong.

Unfortunately, the responsibility still falls on the user to be wary of gaps or biases of any kind and to keep looking until the topic in question is properly (or reasonably?) addressed. It's not an easy task. With time, however, it's likely that we'll develop additional practical norms to handle it.


Sunday, 5 October 2014

Do we make the most out of (computing) technology?

'Typewriter' by Reavenshoe Group
under a CC license
Sadly, the brief answer is no. Most of us have in our hands, at home or at work, computing or other electronic hardware that would have been considered pure fiction 20-30 years ago. Although we have changed the way we live and work due to technology, the steps forward we have made don't necessarily go hand in hand with the leaps in technology we have witnessed.

Of course, there are exceptions to the observation above, but let me mention a couple of examples - you can tell me whether they sound familiar or not.

At the place where I work, all employees have PCs. The PCs' primary tasks are e-mail, word processing, printing and web browsing (not necessarily in that order). Yes, sure, some people do some statistical analysis, some DTP, some database design, and some feed input to a number of databases, but still, the majority of PC time is devoted to the tasks I mentioned before. You may think that the volume of work or the quality of the output has increased. Indeed, it may have. But there is still a small number of regular PC users who treat word-processing software more like a typewriter than a modern PC. OK, I'm exaggerating here, but I believe you can see my point.

The other major change has been in the field of mobile devices. Each smartphone is practically a small computer, powerful enough to handle not only calls and messages but also browsing, VoIP, video chat and practically most of the stuff that would run on a desktop computer. Do people use those features? Yes, some people use some of them. But others seem to have problems with the new technology. The following infographic shows an approximate breakdown of the various uses of smartphones.



According to the infographic above, new stuff (web, search, social media, news, other) accounts for a moderate-to-low 24% of smartphone use time. An interesting question would be whether the total time interacting with smartphones is higher than before, when we had plain mobile phones. I suspect it is.

So why can't we do more and different things now that we have such computing power in our hands?

I don't really know (I'll be doing some guessing here) but here are some possible reasons:
  • Bad design of the user interface. Yes, all manufacturers and software designers call their interfaces intuitive, but that is not always the case. To make things worse, I don't believe there is such a thing as the perfect user-friendly, intuitive interface. It will always take persistence, imagination and luck to use an interface successfully. But there are design basics that can help. Below is an early (very) critical review of Windows 8 (which, btw, I rather like as an OS).


  • Crappy or buggy software; software incompatibilities; software complexity; inconsistency across platforms and devices; lack of decent manuals or efficient tutorials; lack of user training (it sounds old-fashioned, but in some cases it could help).
  • Software cost and/or poor use of open source software. This particular point always bugs me. It's fine to pay for software that enhances productivity. But why do businesses avoid investing in open source software in a coherent way? Especially in cases where the open source alternative proves better in usability, compatibility and, well, cost.
  • Hardware restrictions. Yes, you read correctly. We have plenty of processing power, but we may have other limitations that hinder full use of that power. For instance, smartphones can do a lot, but they need to be reliably connected to a fast network, and that comes at a cost that in many cases is undesirable or even excessive. Another example is modern PCs, which are powerful but often come with the minimum possible screen real estate. Just adding a second monitor would boost productivity (and save on printer paper), but the majority of workplaces I know of stick to small single monitors (often badly positioned in front of the user). Another all-too-common thing is policy restrictions on the use of PCs, some of which severely impact usability, especially when paired with an IT department that refuses to listen to the users' needs.
  • IT departments that are overloaded with routine tasks and don't have the resources to add new capabilities to their systems (an extra programmer could do miracles under many circumstances).
  • No reliable communication between (casual) users and developers to assist new product development or product improvement (yes, there are beta testers, and developers can gather telemetry data, but this is not even close in magnitude to what I refer to).
The disappointing thing is that most of the problems above are not so hard to address. Maybe the entire product-market-user model needs some rethinking. Maybe developers and, possibly, manufacturers need to put more effort into durable platforms and commit to their support for longer periods. And, finally, maybe we, the users, need to be more conscious of our options/choices and voice our thoughts/wishes/concerns when needed. Just saying...