Curated Content: AI and ethics
With the FASEA exam looming, we’re receiving a lot of enquiries about ethics.
We are also at the peak of the hype cycle as far as artificial intelligence and machine learning are concerned.
So, what better time to ignore reality and focus on the future?
Ethics for AI
Artificial Intelligence has been a mainstay of speculative fiction since the birth of the genre, but AI has been the villain as often as the hero. It’s no wonder that people like Gemma Whelan and Bill Gates are apprehensive and a little freaked out.
[We don’t share their concerns - we’re less worried about autocratic AI than we are about the inevitable Zombie apocalypse. Being an organic battery is a lot less work than running, fighting and doing stuff.]
Artificial Intelligence, as the Brookings Institution notes, is already transforming the world.
Thankfully, regulators in Europe and Australia are already grappling with how artificial intelligence should be ethically and legally controlled (in a manner vastly different to Ex Machina).
On 5 April this year, the CSIRO released a consultation paper proposing a number of principles that could guide developers, governments and billionaire industrialist philanthropists working with, or applying, AI.
We’d recommend you read their impressive report, but for your convenience, their proposed core principles are:

- Generates net benefits
- Do no harm
- Regulatory and legal compliance
- Privacy protection
- Fairness
- Transparency and explainability
- Contestability
- Accountability
These principles are at the core of organic intelligence and conduct, so we can’t anticipate anything other than 100% compliance.
It’s too late to make a public submission, but please read the report.
Despairing at being left in the wake of the Australian AI pioneers, the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) published its Ethics Guidelines on 8 April 2019.
Imitation may be the sincerest form of flattery, but the ‘Ethics Guidelines for Trustworthy Artificial Intelligence’ build on the fundamental rights enshrined in the EU Charter of Fundamental Rights and propose a balanced, bureaucratic and reasonable solution.
In the common market defined as the European Union, AI must be:

- lawful
- ethical, and
- robust.
AI HLEG’s ‘Ethics Guidelines for Untrustworthy Artificial Intelligence’ will surely follow shortly. In fairness, the Group has made a remarkable effort to embed respect for human autonomy, fairness, harm prevention and explicability in their Guidelines.
We recommend that you read the report (before the machines block your access).
Hopefully, these guidelines will help us, as a species, avoid a failure to communicate with HAL.