Back on 28th February 2020, Xynics was a supporting sponsor of the Data & Marketing Association's "Data 2020" event in London, where Rachel Aldighieri, Managing Director of the DMA, hosted an interesting discussion on Artificial Intelligence and Data Ethics.
The crux of the discussion was how businesses can use data ethically, keeping processing fair, transparent, necessary and purpose-limited (all principles of the GDPR), while leveraging new technologies such as AI to benefit both the business and the wider population.
For personal data, data protection law outlines the principles we must all follow to ensure fair and transparent processing, but for other business data we often don't apply those same principles; hence the field of Data Ethics. Just because we can, doesn't mean we should.
Data Ethics studies and evaluates the moral problems related to data, algorithms and the corresponding practices, asking not just whether what we do, and how we do it, is lawful, but also whether it is morally right.
You can read more about Data Ethics from Luciano Floridi and Mariarosaria Taddeo here.
After the event we said we'd post our thoughts on AI, and here is part one of two.
AI - a force for good?
There will always be those who fear change, it’s in our nature, but it is also true that as the generations pass, our propensity to embrace new technology is growing.
At Xynics, we’ve always had a passion for embracing technology to benefit business and individuals alike. Indeed, we’ve been using a form of AI for more than 14 years to process vast amounts of data quickly, efficiently and accurately, using fewer human resources than our competitors. It really does enable our business and benefit our clients.
An AI can crunch vast amounts of data from a plethora of sources very quickly, saving huge amounts of time and resource, and it is generally more accurate because it largely works on binary logic. It doesn't have preconceptions and it doesn't have a bias, although I do wonder whether an AI could learn such things, depending on how humans teach it.
AIs, like us, learn in one of two ways. They are either:
- Given accurate information that they “remember” and refer to; or
- Taught by trial and error. They do or suggest something, are told it's wrong, and change their approach next time until they get it right.
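The trial-and-error style above can be sketched in a few lines of Python. This is a toy illustration only, not how any production system is built: the learner repeatedly tries an action, is told whether it was right, and reinforces whatever worked. The task and all names here are invented for the example.

```python
import random

def train(actions, is_correct, rounds=200, seed=0):
    """Toy trial-and-error learner: try actions, keep score of what works."""
    rng = random.Random(seed)
    scores = {a: 0 for a in actions}      # how often each action succeeded
    for _ in range(rounds):
        action = rng.choice(actions)      # try something
        if is_correct(action):            # feedback: right or wrong?
            scores[action] += 1           # reinforce what worked
    return max(scores, key=scores.get)    # prefer the best-scoring action

# Hypothetical task: learn which keyword satisfies the teacher's check.
best = train(["weather", "news", "sport"], lambda a: a == "weather")
print(best)  # → weather
```

After enough rounds of feedback, the learner settles on the action that was most often judged correct, which is the essence of the second learning style.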
Importantly, an AI is a data crunching tool, just like any other tool we might use. It will be designed and trained to undertake specific tasks of increasing complexity.
What can AIs do?
The list is quite extensive, so here are just a few things:
- Recognise objects in images, emotions in faces, cancers in tissue slides or genomic conditions within DNA
- Translate speech or text, or find information on the internet, do legal case research or undertake analysis of financial data
- Track packages, navigate routes and plan optimal deliveries around times, traffic, weather and other events
- Detect malware, look for errors in computer code, encrypt data and test systems for vulnerabilities and defects
Probably one of the most impressive AIs of late is Babylon Health's, which in 2018 passed a mock-up of the "Member of the Royal College of General Practitioners" exam.
On average, a UK medical student passes this exam with a score of 72%; the AI scored 81%. It was fed with medical journals, texts, case histories and more, and was able to outperform a panel of GPs, even identifying a potentially life-threatening condition in an example patient that the human GPs failed to spot.
Why would we want this? Well, if, like me, you find it increasingly hard to get an appointment with your NHS GP, then an AI that can triage a patient and direct them to a Nurse Practitioner, the GP or a hospital with a very high degree of accuracy would ease the load on the NHS considerably. Babylon Health is a private organisation creating a virtual NHS GP service, but NHS England itself is also investing heavily in AI to help improve social care, save lives and ensure doctors have the time they need to spend with patients.
We’ve all probably used an AI at least once. Ever asked Siri, Alexa, Bixby or your Google Assistant a question? It was an AI in the background that took that question, translated it into data, matched it with algorithms to determine what you had asked for, undertook a search of its databanks (or the internet), matched the results to the question and returned the top N answers, all in a matter of seconds.
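The assistant pipeline described above — question in, match against a databank, top answers out — can be sketched very simply. This is a minimal illustration under obvious assumptions: a real assistant uses speech recognition and far more sophisticated language models, and the "databank" entries here are made up. The sketch just turns the question into data (words), scores each known answer by word overlap, and returns the top N.

```python
def top_answers(question, databank, n=2):
    """Rank databank entries by word overlap with the question; return top n."""
    q_words = set(question.lower().split())
    scored = []
    for topic, answer in databank.items():
        overlap = len(q_words & set(topic.lower().split()))  # shared words
        scored.append((overlap, answer))
    scored.sort(key=lambda pair: pair[0], reverse=True)      # best match first
    return [answer for score, answer in scored[:n] if score > 0]

# Invented databank for illustration only.
databank = {
    "weather forecast today": "Cloudy with sunny spells.",
    "nearest coffee shop": "There's one on the high street.",
    "today in history": "On this day in 1969, Apollo 11 landed.",
}
print(top_answers("what is the weather today", databank))
```

Real assistants replace the word-overlap score with trained language models, but the shape of the pipeline — translate, match, rank, return — is the same.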
The decisions made by some AIs, such as those in self-driving cars, have been questioned, as in the "Moral Machine" experiments (which you can read more about here). In particular, imagine a self-driving car approaching a crossing with a single child on one side of the road and multiple elderly people on the other. Does it swerve and kill the child, who has their whole life ahead of them, or stay on course and kill the group, who have already lived their lives?
Personally, I don't see either option as valid: if the car (like us) were driving in accordance with the rules of the road and reading the conditions around it, it would see the crossing and be able to stop long before that decision needed to be made.
Is there a place for AI?
If you are anything like me, you’ll be thinking there probably is, providing it is properly controlled and implemented with care and consideration.
AI can do a huge amount of amazing things, saving lives, time and money, but the field of Data Ethics, and ethics in general, must play a huge part in how AI is implemented going forward.