Are you inclusive if you’re not ethical?

Inclusivity refers to the practice of including and valuing individuals from diverse backgrounds and perspectives. Ethics refers to a set of moral principles and values that govern behaviour.

The tech world is developing at a pace we simply can’t keep up with. For all the good it can bring, there’s a whole host of, well, not so desirable impacts. And that’s possibly the biggest challenge we’re facing when it comes to inclusivity and ethics in tech innovation; not the development of the tech itself, but the issues we face as a result of it. 

The Collingridge Dilemma

“Each technological change brings progress. However, in many cases, it also causes problems of another variety.” – The Collingridge Dilemma 

First proposed in 1980 by David Collingridge, and now more relevant than ever, the Collingridge Dilemma asks us to think more carefully about the innovations we’re so quick to adopt. Just look at Uber, which disrupted the taxi industry. Yes, the new app might make taxi journeys faster, easier and cheaper for the user. But what of the unintended consequences? The smaller taxi companies who couldn’t offer the same 24/7 on-demand service? The Uber drivers themselves, who got more consistent work but ended up sacrificing decent pay for company profit? Who has actually benefited from Uber in the long run?

The problem with adopting tech is that we can’t take it back. As Sergio De Dios González says:

“Once a technology has been implemented, it’s extremely difficult to reverse this decision.”

The impact of new tech is irreversible

All you need to do is look at mobile phones. We can contact people anywhere in the world, navigate cities, sign work contracts, find the love of our lives and open bank accounts at the click of a button. We’re now more ‘connected’ than ever before.

It’s life on demand. And easily misused. 

Privacy issues, identity theft, data loss and harvesting, security risks: we’ve become more exposed to scams, harassment and bullying, all in the name of ease and accessibility. A life of connectivity and limitless opportunity – for those who can afford it. The ethics of mobile phone usage is dubious, as is the ‘inclusivity’ it offers us.

PETs, policies and programs

Privacy, autonomy, health, social justice, democracy. The Collingridge Dilemma questions whether we could have predicted these problems caused by emergent tech. And if so, what measures could we have put in place to mitigate the risks?

Well, UKRI and Innovate UK have outlined the practices they’re exploring to help create a standardised digital ethics practice, which will support and shape future business innovation. Privacy Enhancing Technologies (PETs) are a big part of that:

“PETs are a set of technologies that use different computational and mathematical approaches to extract data value. In doing so PETs unleash the commercial, scientific and social potential of data, without jeopardising privacy and security.”
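To make that less abstract, here is a minimal sketch of one well-known PET technique, differential privacy: a small amount of calibrated random noise is added to a statistic before it’s shared, so the aggregate remains useful while no single individual’s record can be inferred. The dataset, threshold and epsilon values below are purely illustrative assumptions, not anything from UKRI’s programme.

```python
import random

def dp_count(values, threshold, epsilon=1.0):
    """Count items above a threshold, adding Laplace noise for differential privacy.

    A counting query changes by at most 1 when one record is added or removed
    (sensitivity 1), so Laplace noise with scale 1/epsilon suffices. Smaller
    epsilon means stronger privacy but a noisier answer.
    """
    true_count = sum(1 for v in values if v > threshold)
    # The difference of two independent Exponential(epsilon) draws is
    # Laplace-distributed with scale 1/epsilon.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Illustrative only: share an approximate count without exposing any one record.
salaries = [21_000, 34_500, 52_000, 47_250, 88_000]
noisy = dp_count(salaries, threshold=40_000, epsilon=0.5)
```

The analyst publishing `noisy` can still answer “roughly how many people earn over £40k?” without any individual salary being recoverable from the output — a concrete example of “extracting data value without jeopardising privacy”.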

The University of Edinburgh opened The Centre for Technomoral Futures in 2020 with the aim of bringing technical expertise together with moral expertise in the name of “sustainable and just models of innovation”.

UNICEF has nine living principles “designed to help digital development practitioners integrate established best practices into technology-enabled programs”:

  1. Design With The User
  2. Understand The Existing Ecosystem
  3. Design For Scale
  4. Build For Sustainability
  5. Be Data Driven
  6. Use Open Standards, Open Data, Open Source, And Open Innovation
  7. Reuse And Improve
  8. Address Privacy & Security
  9. Be Collaborative

Whilst we are seeing a promising uptick in risk mitigation practices…

We’re still a long way from ensuring tech innovations are ethical and inclusive.

There are so many emerging solutions, but finding ways to get the majority of people and businesses to implement them isn’t so easy. Especially when most don’t even see eye to eye on the threats we’re facing.

So, how do we define what poses a ‘real’ risk? It depends on the people, their purpose (Profit? Social justice?) and their principles. Because their principles (ethics) govern their behaviour and the extent to which they will consider, and then adopt, specific mitigation practices within their operations.

And as highlighted in the Future of Privacy Forum 2020 white paper, industry frontrunners hold considerable power regarding the uptake of inclusive and ethical practices:

“Much of the important digital activity today takes place on top of technology structure operated by a number of leading companies. Access to data is enabled or restricted by decisions those organisations make and the technical or contractual requirements they establish.” 

So, what’s the real solution?

Acknowledgement, discussion, action.

As the UKRI says, practising digital ethics can’t simply be a tick-box exercise: “it’s not just a list of technologies (however important) but involves asking questions about: privacy, autonomy, security, dignity, justice and power”.

Establishing ethical frameworks could play a key role in that, as “a way of structuring your deliberation about ethical questions.” Chris Fabian and Robert Fabricant suggest questions like:

“Is this platform/product actually providing a social good? Am I harming/including the user in the creation of this new solution? Do I even have a right to be taking claim of this space at all?”

The Collingridge Dilemma suggests that if we’re to build ethical standards into our innovation practices, we must first define the risks and threats a solution poses before we can take action, reconciling exciting new tech with our ethical and inclusive obligations. After all, can something really be considered innovative if it doesn’t offer a better, faster, cheaper, easier solution to the majority?

Written by Maia Broadley, writer and creative contributor for Groundswell Innovation