By Yvette Schmitter, Co-Founder & Managing Partner – Fusion Collective

The exponential advancement of artificial intelligence is radically transforming the e-commerce space. Want to streamline your supply chain management? AI can help with that. Want to automatically generate personalized recommendations to encourage upselling? AI’s got you again. AI can even help you zero in on a high-potential target market and develop content that will resonate with it.

If you think for a minute about that kind of potential, you might find yourself asking, “What’s the catch?” That’s a great question because there is one. A big one.

By integrating AI into our business processes without considering the bigger picture, we often unwittingly embrace a paradigm that prioritizes profits over ethics. In the era of AI, this attitude has devastating and nightmarish consequences that go far beyond the theoretical.

Ethical AI is needed to avoid dangerous misidentifications

It’s easy to see the danger when you start to pay attention to the way that AI bias and AI-driven identification intersect. Automating identification processes has become a key use for AI in the business world and beyond. It is being applied in countless scenarios where data analytics can be used to identify patterns.

In many cases, AI identification might seem benign or even beneficial. In e-commerce, AI automatically identifies consumer trends and drives personalized recommendations; in security, it identifies suspicious patterns of activity; in medicine, it identifies anomalies in medical images.

What makes this type of AI identification a problem is that it is often built upon data biases that perpetuate systemic racism, inequality, and discrimination based on race, gender, and other characteristics. If ethical AI were the goal, AI models would be trained on diverse data sets that would allow them to offer unbiased results. But that isn’t happening. New AI models are being developed and released at a furious pace, which makes it nearly impossible to prioritize ethics. And those models aren’t just being used to help businesses make personalized recommendations.

AI misidentifications are threatening freedom, justice, and human dignity

A Washington Post investigation published in October 2024 shows that law enforcement agencies across the US are deploying AI-powered facial recognition software with minimal oversight and devastating consequences. The Post’s findings, based on records requested from more than 100 police departments over eight months, revealed that Black and Asian individuals are up to 100 times more likely to be misidentified by AI than white men.

This troubling trend is not the exception. According to a follow-up report from the Washington Post, 75 of the 100 departments involved in the study use facial recognition, with 40 reporting that the technology led to arrests. Nearly half of those reporting arrests couldn’t provide details showing conclusively that officers made any attempt to corroborate the AI’s findings.

The Post’s findings are much more than a simple matter of statistics. They’re a matter of freedom, justice, and human dignity, and they exist because law enforcement is using AI irresponsibly. Rather than taking the traditional steps of finding and arresting suspects based on evidence, law enforcement uses AI as a shortcut while ignoring the dangerous risks its biases inject into the process.

The problem the Post found is commonly described as “automation bias,” which many people fall prey to. Automation bias refers to the tendency to blindly trust decisions made by powerful software while ignoring its risks and limitations. When the outcomes not only destroy lives but, in some cases, mean the difference between life and death, that kind of blind trust is unacceptable.

AI development must be held to high ethical standards

Technology on its own isn’t the villain. It shifts into that role when it is wielded irresponsibly. Before law enforcement agencies invest in facial recognition software, they must do their homework and verify that the near-perfect performance touted by developers holds true in real-world use cases. In short, if it sounds too good to be true, it probably is. And a system that doesn’t work reliably in the real world should not be entrusted with deciding outcomes that have life-altering consequences.

The casual way in which many of today’s businesses are using AI suggests we’ve strayed very far from Adam Smith’s original vision of capitalism, where success meant being the finest baker, the most skilled craftsperson, the most talented artist — all in service of delivering excellence to our customers and communities. Instead, we have fully embraced Milton Friedman’s paradigm of “profit above all,” where success is measured solely by financial returns within the boundaries of rules that often fall short of true ethical standards.

Friedman’s definition of capitalism — a free market system in which economic activity is organized by private enterprise — holds that the responsibility of business is to maximize profit within the bounds of the rules. Working within those bounds alone, companies throughout history have done horrific things. When driven by ethics, however, businesses are held to a higher standard than the “bounds of the rules.” This is why the continued slow roll of ethical standards for emerging tech and responsible AI is so dangerously concerning.

The problem in today’s business world isn’t AI itself but rather our failure to ensure that our collective values shape our technological future. We see it every day in the rampant misuse of data, the lack of transparency, and the lackadaisical approach to the ownership, security, and control of our own data. That’s why ethics matters so much.

Each of us has a moral responsibility to understand AI basics, advocate for ethical guidelines, and reject systems that do not align with our values. We can no longer afford to be passive observers, hoping that profit-driven companies will somehow prioritize ethics over earnings. Together, we can ensure that technology lifts us all.

Yvette Schmitter, Co-Founder and Managing Partner of Fusion Collective, is a trailblazer reshaping the future of technology, breaking down barriers, and building bridges where walls once stood. A former Digital Architecture Partner at PwC, Yvette leads with a bold and inclusive vision: technology must serve everyone — regardless of gender, race, culture, or socioeconomic background. For Yvette, innovation isn’t about shiny new tools; it’s about unlocking potential, leveling playing fields, and ensuring underrepresented voices have a seat at the table where decisions are made.