What has AI ever done for us? - Part 2 of 2

Apr 29, 2020

Back on 28th February 2020, Xynics were a supporting sponsor of the Data & Marketing Association's "Data 2020" event in London, where Rachel Aldighieri, Managing Director of the DMA, hosted an interesting discussion around Artificial Intelligence and Data Ethics.

The crux of the discussion was how businesses can use data ethically (being fair, transparent, necessary and purpose-limited, all principles of the GDPR) while leveraging new technologies, like AI, to benefit the business and the wider population.

For Personal Data, data protection law outlines the principles we must all follow to ensure fair and transparent processing, but for other business data we often do not think about those same principles; hence the field of Data Ethics. Just because we can, doesn’t mean we should.

Data Ethics studies and evaluates the moral problems related to data, algorithms and corresponding practices, not just asking whether what we do and how we do it is lawfully allowed, but also if it’s morally right.

You can read more about Data Ethics from Luciano Floridi and Mariarosaria Taddeo here.

In part 1 of this series I looked at the good stuff AI can do, but there is a potentially darker side!

AI - do we want it?

AIs churn through a lot of data.

As with any analytics, the more data you can put in, the better the result. The likes of Amazon and Google already know a phenomenal amount about the people who use their services and the content they watch, view or listen to. They, along with AdTech networks, are building incredibly detailed profiles of us all, and as the generations pass, the level of that detail grows with the increasing adoption of services.

There will be people who argue that AIs are taking jobs, saying we could simply hire more GPs and medical staff instead of using an AI to triage patients. We could pay people to look at images, test results and DNA sequences, and to perform repetitive tasks, instead of replacing humans with machines.

Yes, we could, but the reality is that humans don’t work 24x7; they get tired, make mistakes or miss things. AIs don’t: they just keep going on, and on, and on until the job is done.

In reality, yes, jobs will be lost to AI, but they will be replaced with more skilled roles, which are likely to stem from the very jobs that are replaced.
Someone needs to train the AI and validate its work. Someone needs to maintain it, and over time those people will learn new, transferable skills and probably get paid more for it!


Probably the largest debate is around the use of AI in Law Enforcement and Security

Back in 2018, the UK Met Police wanted to use an AI to start predicting serious violent crime by drawing upon data from the Police National Computer, crime statistics, and past crime and mental health records, among others. That AI would “profile” everyone and identify those who might commit, or become victims of, violent crime.

GCHQ have stated that UK spies will need to use AI to counter a range of threats, as adversaries are increasingly likely to use technology for attacks in cyberspace and on the political system (such as deepfakes).

It is unclear what data the intelligence services propose to use, but given strong past indicators of vast data collection, and laws like the Regulation of Investigatory Powers Act designed to give the intelligence services access to private communications data, it is highly likely that any profiling would draw on a significant amount of information about the population here in the UK, and abroad.

And here is where I am torn. I really do see how AI could benefit local and national security through policing and intelligence, but granting the state access to the most intimate details of our private lives (like internet search history, shopping habits, travel plans or even health data) on a large scale is a massive intrusion of privacy and risks breaching our fundamental human rights. Ironically, it would be allowed under data protection law, owing to the exemption in the GDPR and the Data Protection Act for processing for the prevention and detection of crime.

Even with a “nothing to hide, nothing to fear” mentality, it’s not just me who has concerns here. The Royal United Services Institute (RUSI) think tank also argues that the use of AI by the intelligence services could give rise to new privacy and human-rights considerations, requiring new guidance.

Tin Hat Time?

20 years ago,

  • people would have been horrified if one organisation could record and profile every single search we made in the library, every book we read, how long we read it for, who we recommended it to, or whether we later went on to buy it;
  • they would have thought we were mad to suggest that, based on that data, adverts in the street would change to match our interests and we’d get more letters about the things we were interested in;
  • they would have been highly wary if they thought that every photo we took, every message we sent to friends, even the friends we kept, were all shared with that organisation.

Today the vast majority of us openly share our lives online through social media, and most of us have a fairly detailed profile even on LinkedIn.
We’re all either Apple or Android users on our phones, so we likely share lots of location and usage data, much of it reaching Google in some way.

Google’s global intelligence network is driven by an AI, one that’s hooked into our websites, shopping, even our TVs and homes, learning about us and using that data to make our lives better and to drive Google’s profits.

Do I still think there is a place for AI?

In a word, yes.
AIs aren’t going to do anything we cannot already do. They’re just doing, or going to do, those same things more quickly, more easily, more efficiently and more accurately.

I strongly believe it is a generational thing. Those who have grown up, or are growing up, in the digital world will welcome this new technology as it is rolled out. Those more used to privacy and less welcoming of technology will not trust it and will reject it.

Thankfully, those who legislate and control it are also of a similar generation to those who will embrace it, and, recognising the concerns, will regulate AI’s use through Data Ethics and other means.

AI can and will do a huge amount of amazing stuff, saving lives, saving time and saving money, but as I said before:

Just because we can, doesn’t mean we should!

Contact Us

If you want to discover how you could do more with your data, get in touch with Xynics.
