We have known for a long time that data as such is worthless. What matters is what is done with the data, turning it into information.
That sounds simple enough, but nowadays we receive so much information at any given moment that we are forced to sift through the heaps. Software for this purpose is being developed in many forms. The aim is to provide information only where it is really needed and, if cleverly done, at the right moment. All of this sounds perfectly obvious and logical.
Unfortunately, reality is often more unruly. Apart from the fact that it is not always easy to provide information precisely at the moment it is needed, there is also the matter of familiarity and acceptance: "Can I, as a user, believe what it says?" It gets harder still when the information is derived from (process) sensors. Algorithms are developed to translate the data from these sensors into information, and if one then dives deep into the data, two aspects usually stand out: the accuracy of the data and its interpretation.
First, accuracy: how important is it to know that I am driving on a highway at a speed of precisely 120.573 km/h? 120 ± 5 km/h is accurate enough: we will not be caught speeding and will be home on time. Conversely, in the 100 m sprint a thousandth of a second is a relevant level of precision.
Then the interpretation: if the precision of individual measurements is less critical, one can choose more convenient, cheaper sensors that accept a greater deviation from a calibrated measurement. With clever algorithms and comparison against historical data, the final result is still accurate enough for the user to take a course of action on the basis of the information.
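As a minimal sketch of this idea (not the author's actual method), noisy readings from a cheap sensor can be smoothed with something as simple as an exponential moving average, so that the filtered estimate is far closer to the true value than any single reading. The sensor values and the smoothing factor below are illustrative assumptions.

```python
# Hedged sketch: smoothing noisy cheap-sensor readings with an
# exponential moving average (EMA). Values are illustrative only.

def ema_smooth(readings, alpha=0.2):
    """Return EMA-smoothed values; alpha in (0, 1] weights new readings."""
    smoothed = []
    estimate = None
    for r in readings:
        # First reading seeds the estimate; afterwards blend old and new.
        estimate = r if estimate is None else alpha * r + (1 - alpha) * estimate
        smoothed.append(estimate)
    return smoothed

# Noisy speed readings scattered around a true value of ~120 km/h
raw = [118.2, 123.5, 119.1, 121.8, 124.0, 117.6, 120.9]
smooth = ema_smooth(raw)
print(round(smooth[-1], 1))  # settles close to 120
```

The individual readings here are each several km/h off, yet the smoothed estimate converges on the underlying speed, which is exactly the trade-off the article describes: cheaper sensors, compensated by software.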
It should be noted that trust in the information is not a given; it has to be learned. For one of our products we use Dynamic Linear Modelling. With this technique the outcome presented to the user often no longer visibly relates to the underlying data: he must blindly trust that the system has come to the right conclusion. We often forget that we unconsciously place our lives in the hands of similar techniques, such as the computer calculations in an aircraft.
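To give a flavour of why a user can no longer trace the outcome back to the raw data: the simplest Dynamic Linear Model, the local-level model, is updated with the standard Kalman filter recursion below. This is a generic textbook sketch, not the product's implementation; all numeric values (noise variances, readings, priors) are illustrative assumptions.

```python
# Hedged sketch: a local-level Dynamic Linear Model filtered with the
# Kalman recursion. Every number here is an illustrative assumption.

def dlm_local_level(observations, obs_var=4.0, state_var=0.5,
                    m0=0.0, c0=100.0):
    """Filter a noisy series; return the posterior mean after each step."""
    m, c = m0, c0  # current state mean and variance (vague prior)
    means = []
    for y in observations:
        r = c + state_var      # prior variance after the evolution step
        k = r / (r + obs_var)  # Kalman gain: how much to trust this reading
        m = m + k * (y - m)    # pull the mean toward the observation
        c = (1 - k) * r        # variance shrinks as evidence accumulates
        means.append(m)
    return means

level = dlm_local_level([10.3, 9.8, 10.6, 10.1, 9.9])
print(round(level[-1], 2))  # filtered estimate of the underlying level
```

Each output value is a weighted blend of the entire history rather than any single measurement, which is precisely why the user sees a conclusion he cannot easily verify against the raw data.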
In an era of automation, we want to go further than robots. A robot is limited to the electronics, mechanics and embedded software needed to make a device do what is expected of it. Automation is the use of robotics in such a way that human intervention is reduced, preferably to zero. In the future we will increasingly have to deal with automation in all areas. Human intervention is becoming less necessary, but often enough we still want to "understand" the processes.
Given the above, it is desirable to make knowledge and understanding of sensors and information an integral component of any relevant study. This is quite apart from the flood of general information, in the form of tweets and very short messages without any substantial background, that we are exposed to.
The course "The Art of Information Management" will help us better deal with information.
It is up to developers to make sure that the information provided satisfies the following criteria:
1) Simple and useful information, delivered within the desired time frame
2) Details are available, but the emphasis is on preserving the overview
3) Operational information is focused solely on prevention
Statements, KPIs and historical data, although interesting in themselves, are merely overkill at a moment when one can do nothing with them. This type of data should be presented separately, so that it is clearly distinguished from the operational information.
It will not be long before this subject is compulsory, not only in the education of software programmers but also in that of end-users.
Aart van 't Land