Hi there, Marcus is a perfect fit for this! He recently posted a blog on this topic: https://theaiconsultinglab.com/what-is-responsible-ai-how-to-use-and-implement-ai-ethically/ and has some videos that touch on the topic, though nothing dedicated fully to it quite yet: https://theaiconsultinglab.com/videos/. He has worked with the UAE government and Fortune 500 companies here in the US. His TikTok also touches on this topic on occasion: https://www.tiktok.com/@theaiconsultinglab. Thank you!
I have spent years watching artificial intelligence systems make choices that directly affect people's careers, and honestly, the technology itself is not what keeps me up at night; it's that we are deploying it without thinking through the impact. When building AlgoCademy, I wrestled with something much more personal: how do you create an AI that assesses a person's code without repeating the same mistakes human interviewers make? I have seen brilliant self-taught developers turned away because their answers didn't look textbook-perfect, even when their actual work was excellent.

What bothers me most is that people treat AI governance as mere paperwork. Every algorithm I deploy first goes through an exercise I call adversarial empathy testing, which simply means my team and I deliberately try to use the algorithm in ways that might harm learners. We have caught our AI throwing out perfectly valid solutions just because they didn't look like what you'd see at Google or Facebook.

The whole debate about AI taking over jobs misses something. I have worked with more than half a million learners, and the ones who succeed are not fighting AI; they are learning to be creative with it. That is the real skills gap we should be worried about. But here's the thing: we need governance structures now, not five years from now, when we will have already automated away opportunities for the very people who already find it hard to break into tech.
Building Tutorbase taught me a lot about scheduling. Our first automation saved time, but tutors felt ignored, as if they had no say in their own work. So we gave them preference controls and started regular check-ins. That simple change stopped the trust problems we were having with remote teams. It turns out the tech doesn't matter as much as making sure the people using it have a voice in how it's built.
Working on AI health platforms taught me one thing. At Superpower, we realized our algorithms could mess up, so we created a small board to review actual cases. They found the data problems, and we fixed them. That system still works, letting us move fast without sacrificing what's fair. Don't wait for a disaster to build in checks; do it from the start.
I built SaaS platforms for the gig economy and e-commerce, and too much automation once demotivated my teams; people lost their say in how things worked. We fixed it by showing everyone what the automation was doing and giving them a clear way to flag issues. My advice: pilot any new AI tool with a small group first and make it easy for people to give feedback. It keeps remote teams happier and their work better.