Today, we talked to Natalie, our Head of Consultancy, about an issue that is becoming increasingly pressing, particularly in the world of data privacy: the lightning-speed development of AI.
Programmes like ChatGPT have risen quickly in popularity and are fast becoming mainstream, thanks to the speed at which they can perform tasks that would previously have taken a person far longer to accomplish. As promising as that sounds, there is also real concern about data security.
But what is Natalie’s take on all this recent development?
“HOW DO YOU FEEL THE CONVERSATION ABOUT AI HAS CHANGED RECENTLY?”
“HOW USEFUL IS IT FOR YOU?”
Sure, I have played around on there. It’ll create some mildly amusing rhymes in a variety of dialects — who said data protection can’t be fun? I have asked it to create learning objectives for me, a framework for a blog… and no, it isn’t what I would write, and no, I would not necessarily use it, because I work in a regulated environment that relies on my decision-making, my ability to problem-solve and be pragmatic. But if I worked in a different environment, perhaps AI’s progress would have me feeling threatened too.
“ARE DATA PROTECTION PROFESSIONALS BEST SUITED TO IMPLEMENT AI?”
My challenge is not with the advancement in technology — in fact, I welcome it, and I think it is important that we understand it and embrace its potential. But that is precisely my challenge: I am a data protection professional, not an AI one. I’m working hard to get up to speed and have attended talks and webinars and enrolled on numerous courses to aid my own professional development. But should AI really be falling into the laps of DPOs, consultants, and other privacy professionals?
At the moment, it fits — I can see why some of these issues are ending up in my inbox, and why I am being asked to attend meetings discussing AI implementation. But is that because there is nobody who knows any better, and the DPO seems the best fit? Likely.
“DO YOU CONSIDER THE TECHNOLOGY A DATA PROTECTION RISK?”
The language used is synonymous with that in UK (and EU) data protection legislation, and the conversation revolves around the risks to users… it sounds like a data protection issue. But there is so much more to consider. I have no qualms admitting I do not (yet) understand the technology behind self-driving cars, advanced chatbots, and the infrastructure behind them, but I do understand that there are risks around the accuracy of information, the ownership of AI outputs, biased programming, and unclear regulation. All of which surely should be considered in those conversations that I am party to.
“WHAT EFFECTS WILL AI HAVE ON PEOPLE AND THEIR DATA RIGHTS?”
So, where does that leave us? Who is, or should be, responsible? I don’t know the answer. Yes, as privacy professionals, we can advise on the use of personal data – defaulting to the GDPR’s principles, assessing the risk to the data subject, and highlighting any potential compliance hazards. But is that enough? I can only deduce that the best way to make these decisions is to have a computer scientist, a DPO, and some sort of ethics expert in the room. But realistically, how many organisations have those sorts of resources, or even the finances to facilitate that? Of course, there are some — the behemoths of the tech industry are already there, making leaps in how they can use AI to extract more and more value from their customers. But was the original point of OpenAI not to make AI accessible to the masses?