
Andrew Wheatley | The artificial intelligence and data protection conundrum (Part 2)

Published: Sunday | December 2, 2018 | 12:01 AM | Andrew Wheatley
In this April 23, 2018, photo, Ashley McManus, global marketing director of the Boston-based artificial intelligence firm, Affectiva, demonstrates facial recognition technology that is geared to help detect driver distraction, at their offices in Boston.

In my previous article (Part 1), I outlined how new technologies such as smartphones, smartwatches, and other smart devices capture data about us and, with or without our knowledge, share this personal data with a myriad of different entities.

Such personal data are oftentimes accumulated and compiled from various sources and analysed by very sophisticated artificial-intelligence (AI) and machine-learning (ML) systems in an effort to make specific decisions about us. I concluded that article with the question:

“How are personal data that are derived and created from the assimilation and analysis using methods like machine learning treated by proposed legislation, and what will the effects be for all of us and the associated technologies?”

I thought this was an important scenario to interrogate because of the growing use of AI and ML technologies, and what it means for our personal data and the privacy of our data.

For the first time, the Government of Jamaica is seeking to promulgate legislation in the form of the Data Protection Act that will give the citizen full rights over their personal data. This is important in the highly technology-driven digital age, and even more so when creating the foundations of a digital society.

I want to focus on a few of the many rights ascribed to citizens in the proposed legislation; a rough sketch of what honouring them might look like in practice follows the list. You will have the right to be:

- Informed by the data controller whether your data are being processed by or on behalf of the data controller.
- Informed as to what data are being processed and provided, on request, with the information being processed in an intelligible form.
- Informed of the logic involved in a decision, where the processing is by automatic means for the purpose of evaluating matters relating to you.
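To make these rights concrete, here is a rough, purely hypothetical sketch of the kinds of information a data controller might have to return to a citizen who exercises them. The field names and structure are mine for illustration, not anything specified in the Act.

```python
# Hypothetical illustration only: a possible shape for a response to a
# subject-access request, mirroring the three rights listed above.
from dataclasses import dataclass, field
from typing import List


@dataclass
class SubjectAccessResponse:
    # Right 1: confirmation that your data are being processed
    processing_confirmed: bool
    # Right 2: the data being processed, in an intelligible form
    data_categories: List[str] = field(default_factory=list)
    # Right 3: the logic involved in any automated decision about you
    automated_decision_logic: str = ""


example = SubjectAccessResponse(
    processing_confirmed=True,
    data_categories=["location history", "purchase records", "search queries"],
    automated_decision_logic="offer eligibility scored from payment history and browsing habits",
)
print(example)
```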

CONUNDRUM TAKES SHAPE

While these provisions may seem obvious and rudimentary, if you were to apply these provisions to data being accumulated and analysed by AI systems, things start to become a little complicated. Why?

AI and ML systems are designed to function in such a way that as they process more and more information, they evolve and get smarter. In the process, the decisions they make at any given time may be a far cry from those they were originally created to make. If you apply this reality to the provisions I highlighted above, a very interesting situation emerges, which leads to even more interesting questions.
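To see why, consider a deliberately simplified sketch. The data, features, and model below are invented for illustration (scikit-learn is assumed to be available); the point is only that the internal "rule" driving a decision about the same person is re-derived every time the system is retrained on more data, rather than being fixed by a programmer.

```python
# Simplified sketch: a model's internal "rule" (its coefficients) is re-derived
# from data on every retraining, so the logic behind a decision about the same
# person is not a fixed, human-authored set of instructions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)


def make_batch(n, drift=0.0):
    # Two made-up "personal data" features, e.g. spending score and activity level.
    X = rng.normal(size=(n, 2))
    y = ((1.0 + drift) * X[:, 0] + (0.5 - drift) * X[:, 1] > 0).astype(int)
    return X, y


person = np.array([[0.2, -0.4]])          # the same individual's data, unchanged

X1, y1 = make_batch(100)                  # the system's early training data
model = LogisticRegression().fit(X1, y1)
print("early rule:", model.coef_[0], "-> decision:", model.predict(person)[0])

X2, y2 = make_batch(5000, drift=0.8)      # behaviour in the wider data shifts over time
X, y = np.vstack([X1, X2]), np.concatenate([y1, y2])
model = LogisticRegression().fit(X, y)
print("later rule:", model.coef_[0], "-> decision:", model.predict(person)[0])
```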

Who really is processing your data? Is it a system with rules defined by humans or a system with rules that are constantly changing and defined by a machine?


Will we be able to ever know which data are actually being processed?


How will the 'logic' involved in a decision arrived at by AI or ML systems be provided if the initial rules (created by human programmers) have evolved beyond what humans are able to decipher or comprehend?
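One common workaround in practice, though nothing in the proposed legislation prescribes it, is to approximate a complex model with a small, human-readable "surrogate" model and present that as the logic. The example below is entirely hypothetical (invented data and feature names, scikit-learn assumed), and the approximation it produces can be crude.

```python
# Hypothetical sketch of surfacing the "logic" behind an automated decision:
# train a readable surrogate tree to mimic a complex model's outputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
feature_names = ["search_activity", "message_volume", "location_changes"]  # made up

X = rng.normal(size=(2000, 3))
y = (0.8 * X[:, 0] - 0.3 * X[:, 1] + X[:, 2] ** 2 > 0.5).astype(int)

complex_model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

# The surrogate imitates the complex model's decisions, yielding an approximate,
# human-readable set of rules -- not the rules any programmer originally wrote.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, complex_model.predict(X))
print(export_text(surrogate, feature_names=feature_names))
```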

Herein lies the conundrum that exists in a world where AI and ML meet your right to have control over your personal data. It is commonly thought that if separate, unrelated data are stored on a multiplicity of different platforms that are not directly connected to each other, this information can be viewed as relatively innocuous. This could not be further from the truth.

The fact is that most of the Internet-based systems, services and platforms to which we willingly provide 'harmless' information are connected in one way or another. In many instances, we agree to allow our information to be shared between these systems. In other situations, we are not even aware that the information is being collected at all.

There are AI systems nowadays that can determine whether a war is imminent in a particular part of the world based on an assessment of Internet searches and text messages originating from that geographic location. I was recently informed about a technology called interface-less data collection, where your data are captured by sensors that, many times, you never see. Think about it: Do you know all the data your Fitbit or your automobile's onboard computer actually collects about you?

As technology becomes more and more pervasive, and as the processing power and intelligence of systems continue to increase exponentially, it becomes extremely important that, once we learn how our personal data are being collected and processed, we are given the power to determine how those data are to be treated and used.

So, when fascinating and overly convenient technologies like Alexa, Google Voice, Siri, Google Search, IBM Watson or DeepMind use AI to determine which ads to show you when you search the web, which specials on clothing are best suited to your demographic and online spending habits, or which answer to provide when you ask Alexa a question about medication, we must question the means by which this information was derived.

We must also question how helpful, harmful or invasive this information can be when compiled and analysed by artificial-intelligence and machine-learning systems. We must always bear in mind that while convenient and cool, the effects of some of these technologies can be devastating, and, in some cases, life-threatening if left unchecked.

These technological realities serve to underscore the importance of data-protection legislation and why as citizens we must support what it is meant to do.

- Dr Andrew Wheatley, MP, is a research scientist, senior lecturer in basic medicine and biotechnology, and has published extensively in peer-reviewed international journals. Email feedback to columns@gleanerjm.com.