👀 Here’s how it happens, what to be aware of, and how we need to manage all types of algorithmic decision making.

Back early in my career, I was settling into a senior software engineer role at a new organisation when they put our entire team at risk of redundancy. HR shared that they would be using a decision-making tool that assessed the way we’d used leave – favouring people who took it in long chunks rather than small pieces. This was meant to correspond to productivity, somehow.

I suffer from migraines, and can lose two to three days per month to illness – usually as single days off. I was terrified of what that meant for my chances.

But this wasn’t just me. There are myriad other reasons this algorithm could discriminate, impacting people who:

💊 Need to manage their health around long-term illness or disability

🤗 Care for children, parents and other dependents

🙏 Celebrate religious festivals that aren’t observed by the dominant culture
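To see how such a rule punishes exactly these leave patterns, here’s a minimal sketch in Python. I never saw how the real tool was implemented; the scoring function, names and numbers below are all invented for illustration.

```python
# Hypothetical sketch of a "favour long chunks" leave-scoring rule.
# All names and numbers are invented; the real tool was never disclosed.

def leave_score(leave_blocks: list[int]) -> float:
    """Score leave usage by average block length, so that long
    contiguous blocks score higher than many short absences."""
    if not leave_blocks:
        return float("inf")  # taking no leave at all scores "best"
    return sum(leave_blocks) / len(leave_blocks)

# One two-week holiday: 10 working days in a single block.
holiday_taker = [10]

# Chronic migraines: one single day off per month over a year –
# fewer days off in total, yet scored far worse by this rule.
migraine_sufferer = [1] * 12

print(leave_score(holiday_taker))      # 10.0 – looks "productive"
print(leave_score(migraine_sufferer))  # 1.0  – flagged as a redundancy risk
```

Twelve single days is less leave overall than one two-week holiday, but the averaging rule ranks it far lower – the metric rewards contiguity, not productivity.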

I credit my then-employer for their transparency throughout the process, and their swift rectification when alerted to the issue.

❓But what happens when we automate these processes?

⚠️ Automation both speeds these processes up and scales them out – decisions happen faster, and affect far more people.

⚠️ And we may lose transparency, cutting out the human in the loop who oversees them.

It’s important we carefully examine any process involving algorithmic decision making – anything that follows a strict set of rules, automated or otherwise – for embedded bias.
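One practical way to do that examination, assuming you can segment outcomes by a protected or health-related characteristic, is to compare outcome rates across groups. The sketch below applies the ‘four-fifths rule’ used in US employment guidance; the groups and figures are invented.

```python
# Hypothetical bias check: flag groups whose outcome rate falls below
# 80% of the best-off group's rate (the "four-fifths rule").

def disparate_impact_flags(outcomes: dict[str, tuple[int, int]],
                           threshold: float = 0.8) -> dict[str, bool]:
    """outcomes maps group -> (favourable outcomes, group size)."""
    rates = {g: ok / total for g, (ok, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Invented figures: employees retained (not made redundant) per group.
outcomes = {
    "no health condition": (45, 50),  # 90% retained
    "chronic illness":     (12, 20),  # 60% retained
}
print(disparate_impact_flags(outcomes))
# {'no health condition': False, 'chronic illness': True}
# 0.60 / 0.90 ≈ 0.67, below the 0.8 threshold – worth investigating.
```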

➕ Always be sure to:
✅ Be transparent
✅ Keep oversight and assign accountability
✅ Make sure decisions can be explained
✅ Have a process to raise concerns

Especially post-Covid, with greater flexibility in our work patterns: are you certain the flexibility that’s enabled and encouraged in one place isn’t punished by some other process or system?

Jo Stansfield
