Why EU will find it difficult to legislate on AI
6 February 2020 | 12:53 | EUobserver
Artificial Intelligence (AI) – especially machine learning – is a technology that is spreading rapidly around the world.
AI will become a standard tool to help steer cars, improve medical care, or automate decision-making within public authorities. Although intelligent technologies are drivers of innovation and growth, their global proliferation is already leaving serious harm in its wake.
Last month, a leaked white paper showed that the European Union is considering putting a temporary ban on facial recognition technologies in public spaces until the potential risks are better understood.
But many AI technologies in addition to facial recognition warrant more concern, especially from European policymakers.
More and more experts have scrutinised the threat that 'deep fake' technologies may pose to democracy by enabling artificial disinformation; or consider the Apple Credit Card, which granted much higher credit limits to husbands than to their wives, even when they shared assets.
Global companies, governments, and international organisations have reacted to these worrying trends by creating AI ethics boards, charters, committees, and guidelines, all to address the problems this technology presents - and Europe is no exception.
The European Commission set up a High Level Expert Group on AI to draft guidelines on ethical AI.
Unfortunately, an ethical debate alone will not help to remedy the destruction caused by the rapid spread of AI into diverse facets of life.
The latest example of this shortcoming is Microsoft, one of the largest producers of AI-driven services in the world.
Microsoft, which has often tried to set itself apart from its Big Tech counterparts as a moral leader, has recently taken heat for its substantial investment in facial recognition software used for surveillance purposes.
"AnyVision" is allegedly being used by Israel to track Palestinians in the West Bank. Although investing in this technology goes directly against Microsoft's own declared ethical principles on facial recognition, there is no redress.
It goes to show that governing AI - especially exported technologies or those deployed across borders - through ethical principles does not work.
The case with Microsoft is only the tip of the iceberg.
Numerous cases will continue to pop up or be uncovered in the coming years in all corners of the globe – given a functioning and free press, of course.
This problem is especially prominent with facial recognition software, as the European debate reflects. Developed by Big Tech companies, facial recognition products have been procured by government agencies such as customs and migration authorities, police forces, security services, and the military.
This is true across many regions of the world, including the United States, the UK, and several states in Africa and Asia.
Promising more effective and accurate methods to keep the peace, law enforcement agencies have adopted the use of AI to super-charge their capabilities.
This comes with specific dangers, though: numerous reports from advocacy groups and watchdogs show that the technologies are flawed, delivering disproportionately more false matches for women and for people with darker skin tones.
If law enforcement agencies know that these technologies have the potential to be more harmful to subjects who are more often vulnerable and marginalised, then there should be adequate standards for implementing facial recognition in such sensitive areas.
Ethical guidelines – whether from Big Tech or from international stakeholders – are not sufficient to safeguard citizens from invasive, biased, or harmful practices of police or security forces.
Although these problems have surrounded AI technologies for years, this has not yet resulted in successful regulation to make AI "good" or "ethical" – terms that mean well but are incredibly hard to define, especially at an international level.
This is why, even though actors from the private sector, government, academia, and civil society have all been calling for ethical guidelines in AI development, these discussions remain vague, open to interpretation, non-universal, and most importantly, unenforceable.
In order to stop the faster-is-better paradigm of AI development and remedy some of the societal harm already caused, we need to establish rules for the use of AI that are reliable and enforceable.
And arguments founded in ethics are not strong enough to do so; ethical principles fail to address these harms in a concrete way.
International human rights to the rescue?
As long as we lack rules that work, we should at least use guidelines that already exist to protect vulnerable societies to the best of our abilities. This is where the international human rights legal framework could be instrumental.
We should be discussing these undue harms as violations of human rights, utilising international legal frameworks and language that has far-reaching consensus across different nations and cultural contexts, is grounded in consistent rhetoric, and is in theory enforceable.
AI development needs to promote and respect human rights of individuals everywhere, not continue to harm society at a growing pace and scale.
There should be baseline standards in AI technologies, which are compliant with human rights.
Documents like the Universal Declaration of Human Rights and the UN Guiding Principles on Business and Human Rights, which steer private-sector behaviour in human-rights-compliant ways, need to set the bar internationally.
This is where the EU could lead by example.
By refocusing on these existing conventions and principles, Microsoft's investment in AnyVision, for example, would be seen as not only a direct violation of its internal principles, but also as a violation of the UN Guiding Principles, forcing the international community to scrutinise the company's business activities more deeply and systematically, ideally leading to redress.
Faster is not better. The rapid development and dissemination of AI systems has led to unprecedented and irreversible damage to individuals all over the world. AI does, indeed, hold huge potential to revolutionise and enhance products and services, and this potential should be harnessed in a way that benefits everyone.