The Moment AI Betrayed Me: How Algorithms Expose Gaps in Justice and Ethics

Almost a decade ago, AI shattered my trust—and set me on a path that eventually led me to study for my Doctor of Education with a specialization in Humane Education at Antioch. Today, I’m fascinated by the ethics, technology policy, and societal impact of AI, and I’m exploring how education can be the driving force behind a more just, ethical, and humane future in an increasingly automated world. But back then, I was much more naive about our technology. I just thought Google Photos, the successor to Picasa, was a magical sorting box for my memories, never questioning the silent, AI-driven categorization happening behind the scenes.

Then one day, I was scrolling through news articles when one item caught my attention: Google Photos’ AI-powered image recognition system had labeled people with dark skin, people who looked like me, as gorillas.

I froze. My stomach dropped. Surely, this was a glitch. A mistake. Something this advanced, built by some of the world’s brightest minds, couldn’t possibly be so profoundly wrong. But it was.

As I fell down the rabbit hole, searching for answers, the headlines flooded in, confirming what I feared: this wasn’t just an error. It was a failure—a failure that cut deep, exposing a chilling truth about the technology shaping our world. AI didn’t just misidentify us. It misrepresented us. It misunderstood us. And, perhaps worst of all, it didn’t seem to care.

I wasn’t just looking at an algorithm’s misstep. I was staring at an echo of history, at a machine reinforcing the same racism that had dehumanized my ancestors for centuries. But this time, it wasn’t coming from an individual with cruel intentions—it was coming from technology, from a system built to “understand” the world. And in that moment, I realized something terrifying: AI didn’t understand me and may not even care about who I am.

The emotional harm of that realization was staggering. I felt invisible. I felt erased. I felt like the people who built this technology hadn’t considered me, hadn’t considered the people I love, and hadn’t considered the weight of their mistakes. I imagined little girls, much like the one I once was, innocently using these technologies and being harmed by them.

Five years before that moment, in 2010, I had started PowerUp.org, an initiative designed to bridge the digital divide. I believed that diversifying Science, Technology, Engineering, and Mathematics (STEM) would open doors, that putting more people who looked like me in tech would create a better, more inclusive world. Until 2015, my focus had been clear: representation mattered.

But that moment with Google Photos forced me to see the bigger picture. It isn’t just about getting more diverse voices in the room. It’s about making sure that our humanity is recognized, respected, and embedded into the very systems that shape our world.

AI Is Not Neutral—It Is Power

Today, AI governs access to information, opportunity, and freedom. It determines who gets hired, who gets a loan, who is flagged as a threat. It is embedded in our healthcare systems, our justice systems, our workplaces.

And here’s what I know: AI is not neutral. It doesn’t just reflect the biases of its creators, the limitations of its datasets, or the motives of those who fund it—it exposes them. 

It reveals the systemic injustices embedded in our world, making them impossible to ignore. AI has the power to either amplify these injustices or dismantle them. And right now, in too many cases, it is doing the former.

I am privileged to be in rooms where AI is being shaped. I speak with policymakers, engineers, ethicists—people making decisions that will define our future. And we keep asking the same question:

Who decides what is fair?

Who gets to determine the parameters of the AI systems that are shaping our world? Who is at the table when algorithms are trained, when ethics policies are written, when the future is coded?

Because if people like me aren’t in the room—if we are not actively designing these systems—we are not just left out of the conversation. We are the ones who will be harmed by it.

A Seat at the Table—Or Just on the Menu?

I have spent my career at the intersection of technology, education, and ethics, working to ensure that technology is used as a force for equity rather than oppression. I have seen firsthand how AI can be used to detect bias in hiring, to personalize learning experiences, and to expand access to knowledge.

But I have also seen AI systems that disproportionately penalize people of color in job screenings. Chatbots that replicate racist and sexist language. Healthcare algorithms that deprioritize Black patients for life-saving treatment. AI-powered surveillance that criminalizes marginalized communities under the guise of public safety.

This is not theoretical. It is happening now. And the people most affected by these systems—the people with the most at stake—often have the least say in how they are designed.

If AI is going to shape our world, then our world—all of it—must shape AI.

The Path to Humane Education

This realization propelled me to pursue a Doctor of Education degree at Antioch University. I knew I needed more than just technical expertise in AI—I needed a way to ensure technology serves humanity, not the other way around. When I first encountered the field of Humane Education, it felt like a revelation. Its expansive, interconnected approach—linking human rights, environmental sustainability, animal protection, and cultural transformation—was exactly what I had been searching for. Discovering this program was like finding my people—a community of bold thinkers, educators, and changemakers dedicated to systemic transformation. 

Today, as an Antioch student, I am designing an interdisciplinary program at the intersection of Humane Education and technology. I am learning to bridge the worlds of technology and ethics, ensuring that AI doesn’t just include diverse voices but is fundamentally built to uplift, empower, and protect them. This program isn’t just deepening my knowledge—it’s equipping me to demand accountability, drive justice, and create the kind of systemic change that technology and society so desperately need.

The Future We Choose

We stand at a crossroads. We can choose a path where AI perpetuates inequality, where it automates discrimination, where it deepens divisions. Or we can choose a path where AI amplifies justice, where it expands opportunity, where it centers love, dialogue, and human dignity.

The future of AI is not inevitable. It is the sum of our choices. And if we choose wisely, if we build intentionally, if we demand accountability, then AI can be more than just a tool. It can be a force for collective liberation.

The question is not whether AI will shape our world. The question is whether we will have the courage to shape it in a way that honors our shared humanity.

I, for one, refuse to leave that future to chance.

Will you?


Liza Mucheru Wisner

Liza Mucheru Wisner is a workplace culture and talent development expert, specializing in AI, automation, and the Future of Work. An award-winning strategist and television personality, she has built a career transforming how organizations optimize workforce potential. With degrees in computer science and educational technology, Wisner is pursuing a Doctor of Education at Antioch University, focusing on AI-driven workforce innovation. She is the founder of PowerUp.org, dedicated to bridging the digital divide. As OpenSesame’s Enterprise Curator, she collaborates with publishers and customers to curate impactful learning experiences that shape the evolving workplace.