8th August, 2014
One of the biggest ideas explored at the recent Big Data 2014 event was privacy. In a research and consumer landscape dominated by the ideals of big data, establishing boundaries around the extent of data collection should be a given. As our societal digitisation has evolved, so has our role as consumers. We are no longer simply the endpoint for a product; we have become a more valuable resource than previously imagined, leading prominent data researchers and analysts to argue that the time has come to reclaim our eroded privacy – but is it too little, too late?
Our understanding of big data is constantly developing. As we establish more and more data sources, the data pool we can draw from expands, allowing us to sift for longer, uncover additional patterns and aggregate valuable insights. There are countless examples of day-to-day use of big data analytics to improve everything from power grids to medical equipment ordering, traffic light timings and virus-spread predictions (though the last has come under scrutiny due to a distinctly big data problem: too many individuals aware of the system overloaded the data with false positives, rendering the service unusable for some time). Big data will continue to act as a significant driver in global industry, in research and development, and in consumer retail.
One symptom of the big data era is the erosion of personal privacy. Many individuals are now aware of ‘opt-in/opt-out’ options when signing up to large consumer services where data is currency. But many services offer their users no such choice, and in many cases the users themselves aren’t aware that their personal data is the very reason the service is ‘free.’
The companies utilising personal data are usually stringent in their anonymisation processes, but as we have seen countless times over the last decade, attacks on personal data stores are increasing and it is possible to re-identify supposedly anonymous data. The risk is only exacerbated when the data can be re-identified for some gain, which leads to the conclusion that additional safeguards should be applied, e.g. further scrubbing of identifying features from the data.
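To make the idea of ‘scrubbing identifying features’ concrete, here is a minimal sketch of the kind of processing implied above: direct identifiers are dropped or pseudonymised, and quasi-identifiers (fields that enable re-identification when combined, such as full postcode and exact age) are generalised. The field names and schema are purely illustrative assumptions, not any specific company’s practice.

```python
import hashlib

def scrub_record(record):
    """Return a scrubbed copy of a consumer record (illustrative sketch)."""
    scrubbed = {}
    # Replace the direct identifier with a truncated one-way hash.
    # Note: this is pseudonymisation, not full anonymisation -- the hash
    # is stable, so linkage attacks remain possible.
    scrubbed["user_ref"] = hashlib.sha256(record["email"].encode()).hexdigest()[:12]
    # Generalise quasi-identifiers: full postcode -> outward code only,
    # exact age -> ten-year band.
    scrubbed["postcode_area"] = record["postcode"].split()[0]
    scrubbed["age_band"] = f"{(record['age'] // 10) * 10}s"
    # Non-identifying attributes pass through untouched.
    scrubbed["purchase_total"] = record["purchase_total"]
    return scrubbed

record = {"email": "jane@example.com", "postcode": "SW1A 1AA",
          "age": 34, "purchase_total": 59.99}
print(scrub_record(record))
```

Even this simple generalisation illustrates the trade-off at the heart of the debate: the coarser the quasi-identifiers become, the harder re-identification is, but the less analytical value the data retains.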
Society as a whole has been somewhat negligent of data protection for a long time, not truly understanding the implications of releasing personal information onto the web. But as more and more people become aware that their personal data remains active for many years after it is posted, increasing questions are being asked concerning consumer privacy rights. New legislation aimed at easing consumer fears has been enacted in Australia, where businesses earning over $3m must disclose their uses of personal data; the UK Data Protection Act, by contrast, hasn’t been thoroughly updated since 1998, with only a small amendment in 2003 to bring it somewhat up to speed – over a decade ago. The disparity between government legislation and the pace of digital society continues to widen.
A recent project titled ‘I Know Where Your Cat Lives’ provided an outstanding insight into the erosion of privacy in digital society – and into our alarming willingness to embrace the digital transition without concern for our personal privacy. Whilst the project makes light of its use of publicly hosted images of cats, it highlights the disturbingly easy process of data collection, collation and utilisation that many companies consider fair game.
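The mechanism that makes such projects possible is the location metadata embedded in publicly posted photos: EXIF GPS tags store latitude and longitude as degrees/minutes/seconds plus a hemisphere reference, and converting them to a map coordinate is trivial. A minimal sketch of that conversion is below; the sample values are illustrative, not taken from any real image.

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds plus a hemisphere
    reference ('N'/'S'/'E'/'W') into signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative by convention.
    return -value if ref in ("S", "W") else value

# EXIF GPSInfo stores coordinates as (deg, min, sec) rational triples.
lat = dms_to_decimal(51, 30, 26.0, "N")
lon = dms_to_decimal(0, 7, 39.0, "W")
print(f"{lat:.4f}, {lon:.4f}")  # prints 51.5072, -0.1275
```

In practice these triples would be read straight out of an uploaded image’s metadata with an off-the-shelf library; the point is that no special tooling or access is needed – anyone harvesting public photos can place their subjects on a map.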
It is not only through supposedly public data sources that our personal information is acquired. An increasingly common theme throughout the 21st century has been massive data breaches, in which organisations assuming a position of data responsibility lose, leak or have stolen sensitive documents that end up in the wild. Given that such organisations are expected to maintain our privacy through data protection and security, the frequency of these breaches is surely a concern for anyone who uses digital technologies, in particular those deemed sensitive sources: smartphones and tablets. Far beyond transient data-based services, data derived from smart device transactions can be sold on to third-party data brokers for continued use (albeit mostly by suspect applications, but it is a practice that is seemingly hard to quell).
Ultimately, concerns over big data privacy will continue until sufficient legislation is enacted to ensure consumer protection throughout the multitude of industries benefiting from consumer data. It is also difficult to assume that, even with sufficient legislation in place, individuals will engage with an increasingly complex situation concerning their own data, let alone appreciate the effect that mass negligence can have in similar scenarios. From the denial of ‘opt-in/opt-out’ systems to the populace genuinely believing that each website will ensure their data privacy, consumer data protection knowledge can be found equally as wanting as current protection practices.
Safeguarding our data futures must be a priority for the big data era; just how to legitimise its uses is another consideration.