The Algorithmic Paradox: Why AI Could Be Our Best Chance to Optimize Opportunity
The research I couldn't fit into my book might be a light on the path to algorithmic fairness. If a meritocratic ideal is the goal, reaching it will require tools that both detect and mitigate our preferences.
The Missing Research
When I was writing Reconstructing Inclusion, there was a body of research I desperately wanted to include in Chapter 4, "AI-DEI Humanity Enhanced or Compromised."
The research was so compelling, the implications so profound, that I kept trying to find space for it. However, as any author knows, even the most important material may not make the final cut due to length constraints.
That research was "Discrimination in the Age of Algorithms" by Jon Kleinberg (Cornell), Jens Ludwig (University of Chicago), Sendhil Mullainathan (University of Chicago), and Cass R. Sunstein (Harvard).
The author lineup alone should have been a signal of its significance: Mullainathan is co-author of the groundbreaking study showing that identical resumes received about 50% more callbacks when they carried white-sounding names like Emily and Greg than when they carried "Black American"-sounding names like Lakisha and Jamal—research that has been replicated across countries, substituting ethnic-sounding names for those more familiar to each nation's dominant culture. Sunstein, meanwhile, co-authored Noise with Nobel Prize winner Daniel Kahneman, fundamentally changing how we understand decision-making variability.
What makes their algorithmic discrimination research so remarkable is that it flips our entire understanding of the AI bias problem on its head. While everyone worries about biased algorithms, they argue the real issue is that human decision-making is so opaque that discrimination thrives undetected.
The Lukewarm Reception
The significance of this research became clear to me during a conversation that perfectly encapsulates our resistance to algorithmic solutions.
As I recount in my book: "My excitement led me to share my sentiments with the head of HR of a large Fortune 500 healthcare company. His response was lukewarm. He told me, 'Yeah, we will see.' Why didn't he engage with me? In truth, I don't know. It's possible that, for him, HR systems were working well. He was successful in his role. His engagement with senior leadership peers was well managed. Perhaps introducing tools that created a different level of accountability and required letting go of a certain amount of control didn't seem appealing."
This lukewarm response reveals something profound about our relationship with transparency and algorithmic decision-making. The HR executive's hesitation wasn't necessarily about the technology—it was about what the technology would reveal about existing human processes. It was also, as Workday is finding out, about the risk companies take on when they are not vigilant about their training data, especially when there is a high likelihood that past decisions were biased.
The Terminator Test for Inclusion
As I wrote in Chapter 4, the opening quote from Terminator 2: Judgment Day captures something essential about AI's potential: "The unknown future rolls toward us. I face it, for the first time, with a sense of hope. Because if a machine, a Terminator, can learn the value of human life, maybe we can too."
Sarah Connor's message was that if an AI could learn to value humanity, humans could choose humanity, too. This connects directly to what Kleinberg and his co-authors discovered: properly designed algorithms don't just avoid discrimination—they can become powerful tools for detecting and eliminating human bias.
Many of our organizational diversity and inclusion initiatives fail because we design them with particular subsets of group identity in mind rather than structuring our efforts with the superset of humanity in mind. This limitation becomes even more pronounced when we consider bias detection itself.
Human decision-making operates like what coders would call a black box—we can see the inputs and outputs, but the internal processing remains hidden, even to ourselves. As I note in my book, "our experiences are like the training data set that (mostly unconsciously) drives our individual and organizational behaviors. In our professions, experiences we have had across time lay the foundation for our heuristic strategies—those gut feelings that help us cut through data ambiguity and drive many of our decisions."
The Kleinberg research reveals just how problematic this opacity becomes for discrimination detection. When a hiring manager says they selected the "best qualified" candidate, they're not lying—but they likely don’t acknowledge all of the factors that actually influenced their choice. Did the candidate's name, school, previous companies, appearance, or other background characteristics play a role? It's nearly impossible to know.
The Algorithmic Transparency Advantage
Here's where the paradox emerges: while humans are opaque black boxes, properly regulated algorithms can be completely transparent. Kleinberg and colleagues show that algorithms flip the discrimination detection dynamic entirely:
Explicit Decision Rules: Unlike humans, algorithms must have their decision-making logic explicitly programmed. Every factor considered, every weight assigned, every trade-off made can be documented and audited.
Perfect Memory and Consistency: Algorithms never forget their training data or change their approach mid-stream. Every decision can be traced back to specific inputs and logic.
Counterfactual Analysis: We can ask precise questions impossible with humans: "How would this candidate's score change if they had a different name? What if they were older or younger? A different school or no college degree? Different work experience?"
Systematic Testing: We can run controlled experiments, feeding algorithms test cases where we know the right answer and checking for discriminatory patterns.
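The counterfactual and testing points above are concrete enough to sketch in a few lines of code. Here is a minimal example in Python; the score_candidate model, the field names, and the applicant records are hypothetical stand-ins for whatever screening system a company actually runs, not anyone's production logic.

```python
# A minimal sketch of counterfactual analysis and systematic testing.
# Everything here is hypothetical: `score_candidate` stands in for a real
# screening model, and the candidate fields are made up.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Candidate:
    name: str
    school: str
    years_experience: int
    has_degree: bool

def score_candidate(c: Candidate) -> float:
    # Placeholder model; a real audit would call the production screening model.
    score = 0.1 * c.years_experience + (0.5 if c.has_degree else 0.0)
    if c.school == "Ivy State":   # a suspicious proxy feature, on purpose
        score += 0.3
    return score

def counterfactual_delta(c: Candidate, **changed) -> float:
    """Score change when one feature is swapped and everything else is held fixed."""
    return round(score_candidate(replace(c, **changed)) - score_candidate(c), 3)

applicant = Candidate(name="Lakisha Washington", school="City College",
                      years_experience=6, has_degree=True)

# Questions that are nearly impossible to put to a human decision-maker:
print(counterfactual_delta(applicant, name="Emily Walsh"))   # 0.0 only if the name truly isn't used
print(counterfactual_delta(applicant, school="Ivy State"))   # exposes the school proxy
print(counterfactual_delta(applicant, has_degree=False))     # documents the degree trade-off
```

Each print line is a question no interviewer could reliably answer about their own judgment.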
The COST of Algorithmic Inclusion
The research aligns perfectly with the COST framework I outline in my book—the price companies must pay if keeping customers and talent is important: Care, Openness, Safety, and Trust.
Care means acknowledging the interdependence of algorithms, their makers, and their consumers. If companies expect people to continue consenting to provide data for AI training, they must ensure that machines can perform tasks that humans cannot—such as being agnostic to the identities of those they assess. As I write, "machines are agnostic to the identities of those they are programmed to assess, and the human who wrote their code and the training data used is not."
Openness becomes mandatory when algorithms are involved. As Kleinberg and colleagues argue, if an algorithm is created, makers must properly test it and ask questions like: "How would the screening rule's decisions have been different if a particular feature of the applicant were changed?"
Human biases are difficult to detect over short periods. Asking humans to calculate what would change if different human features were considered is nearly impossible.
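With an algorithm, by contrast, every decision is logged, and the pattern check becomes routine. Here is a sketch, with invented decision records and group labels, that compares selection rates by group against the widely used four-fifths rule of thumb:

```python
# Sketch of a regular bias test over logged algorithmic decisions.
# The decision records and group labels are invented for illustration.

from collections import defaultdict

decisions = [  # (self-reported group, was the applicant advanced?)
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

selected, total = defaultdict(int), defaultdict(int)
for group, advanced in decisions:
    total[group] += 1
    selected[group] += int(advanced)

rates = {g: selected[g] / total[g] for g in total}
best_rate = max(rates.values())
for group, rate in sorted(rates.items()):
    impact_ratio = rate / best_rate
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"   # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} [{flag}]")
```

Run against months of real decision logs, a check like this surfaces in minutes what interviewing humans about their gut feelings never could.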
Safety requires protecting the people whose outcomes are processed through human-machine interactions, with algorithms as arbiters of their decisions. Given the potential for significant harm, companies must prioritize safety for their digital assets.
Trust becomes the cornerstone. As Sarah Connor's Terminator quote suggests, if machines can learn the value of human life, then we can instill humanity in all machines. Of course, humanity has to be philosophically aligned with ancient wisdom, not the transhuman whims of billionaire tech bros.
Three Sources of Algorithmic Bias (And How to Fix Them)
The research identifies exactly where bias enters algorithmic systems—and shows these are all human choices that can be regulated:
1. Biased Training Data
If algorithms learn from historically discriminatory hiring decisions, they perpetuate those biases. As I note, "the coding determines the outcomes of how the system detects nuances. If the training data are incomplete and the system normalizes a certain skin tone and set of features, inherent bias evolves when identifying non-normalized groups."
Dr. Joy Buolamwini's research exemplifies this: "Commonly used face detection code works in part by using training sets—a group of example images described as human faces. The faces chosen for the training set impact what the code recognizes as a face. A lack of diversity in the training set leads to an inability to easily characterize faces that do not fit the normal face derived from the training set."
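The audit Buolamwini's work implies can start before a model is ever trained: tabulate who is actually represented in the training set. A minimal sketch, with hypothetical metadata and category labels:

```python
# Sketch of a training-set representation audit; the records and the
# "skin_tone" categories are hypothetical placeholders.

from collections import Counter

training_metadata = [
    {"image_id": 1, "skin_tone": "lighter"},
    {"image_id": 2, "skin_tone": "lighter"},
    {"image_id": 3, "skin_tone": "darker"},
    # ...thousands more records in a real dataset
]

counts = Counter(record["skin_tone"] for record in training_metadata)
n = sum(counts.values())
for tone, count in counts.most_common():
    print(f"{tone}: {count} images ({count / n:.0%} of training set)")
```

Documenting a table like this is one of the transparency requirements that makes the bias visible instead of baked in.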
2. Untenable Objectives
If algorithms optimize for faulty outcomes—like "similarity to current high performers" rather than actual job performance—they may disadvantage qualified candidates from underrepresented groups.
3. Discriminatory Input Selection
If algorithm builders choose to include some candidate predictors while excluding others—perhaps investing heavily in data advantaging certain groups—bias enters the system.
The solution in all cases: transparency requirements, documented justifications, and regular bias testing.
The Counterintuitive Case for Inclusion-Aware Algorithms
Perhaps the most provocative finding: allowing algorithms to "see" protected characteristics can actually reduce discrimination. This aligns with my observation that "focusing on a specific subset instead of emphasizing the breadth of humanity could be limiting even for those who are the intended targets of our approaches."
If recommendation letters contain racial bias, a race-blind algorithm assumes these letters mean the same thing for all candidates, potentially discriminating against minority candidates. A race-aware algorithm, by contrast, can detect that letters predict differently for different groups and adjust accordingly, becoming a bias-correction mechanism rather than a bias amplifier. (I would prefer we not be "race" aware at all, since race is something I think we need to eliminate; but until we start using a tool like Diversity Atlas to collect global identity data, we simply don't have much data about appearance beyond racialization.)
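To make that mechanism concrete, here is a sketch of a group-aware adjustment. The historical records, group names, and simple average offset are all invented for illustration; a production system would use a proper calibration model and far more data.

```python
# Sketch: if recommendation letters predict performance differently by group,
# learn a per-group calibration so the adjusted score means the same thing
# for everyone. All data, names, and numbers here are invented.

from statistics import mean

# (group, letter_strength 0-1, later observed performance 0-1)
history = [
    ("group_a", 0.9, 0.8), ("group_a", 0.7, 0.6), ("group_a", 0.8, 0.7),
    ("group_b", 0.6, 0.8), ("group_b", 0.5, 0.7), ("group_b", 0.7, 0.9),
]

# Per-group offset: how much letters under- or over-state actual performance.
offsets = {}
for group in {g for g, _, _ in history}:
    offsets[group] = mean(perf - strength
                          for g, strength, perf in history if g == group)

def adjusted_letter_score(group: str, letter_strength: float) -> float:
    """Correct the raw letter signal by the group-specific calibration offset."""
    return round(letter_strength + offsets.get(group, 0.0), 3)

print(adjusted_letter_score("group_a", 0.8))  # letters roughly track performance
print(adjusted_letter_score("group_b", 0.8))  # letters understate it, so the score adjusts up
```

The point is the direction of use: the group attribute corrects a biased signal rather than penalizing anyone, which is the kind of bias-correction mechanism the research describes.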
The “Disparate Benefit” of Better Prediction
When we properly regulate algorithmic construction, we unlock what the research calls "disparate benefit"—better prediction accuracy that disproportionately helps disadvantaged groups. Their analysis of pre-trial detention decisions shows algorithmic risk assessment could reduce jail populations by 42% with no increase in failure-to-appear rates, primarily benefiting African-American and Hispanic defendants who comprise 90% of jail inmates.
This connects to my broader thesis: "AI holds the potential to broaden our insights about humanity. If used thoughtfully, it can mitigate harm in a variety of processes and systems where built-in historical biases can be examined and reconsidered with code."
The resistance I encountered from that Fortune 500 HR executive reflects a broader pattern. As I write: "Much of the average person's experience with algorithms comes from the social media platforms they engage with." Now, people are, of course, using LLMs, which were not as readily available when I completed writing Reconstructing Inclusion in early 2022.
Nonetheless, we've become accustomed to algorithmic opacity, accepting echo chambers that reinforce our existing biases rather than challenging them. We can argue that LLMs produce less of that effect, but their training data, oriented toward high-income countries, suggests the worldview embedded in these systems skews toward a particular slice of humanity.
As anthropologist Nick Seaver observes, algorithmic systems are "interdependent works of collective authorship by design, made, maintained, and revised by many people with different goals at different times." The question becomes: will we choose to design these systems with inclusion in mind?
The Path Forward: Humanity Enhanced, Not Compromised
The Kleinberg research offers a roadmap for what I call in my book "humanity enhanced" rather than compromised. The key insight: the future of fair hiring isn't about choosing between humans or algorithms—it's about designing systems where algorithms enhance human judgment while eliminating (or at least significantly mitigating) human bias.
As I conclude in Chapter 4: "Humans and AI go hand in hand. While some believe the technology will eventually outsmart humanity, I am voting for humans, predominantly choosing humanity, sustainability, and civility as our long-term modus operandi (with the assistance of AI built with Caring, Openness, Safety, and Trust)."
The lukewarm response from that HR executive now seems shortsighted. We have research showing how to transform AI from a potential threat into an equity engine. The technology exists. The framework is clear. What we need is the will to embrace transparency and accountability in service of more inclusive outcomes.
If a machine can learn the value of human life, as Sarah Connor suggested, then perhaps we can learn to use machines to help us see the full value of every human candidate—free from the biases that have limited our vision for far too long.
Ready to transform inclusion from concept, to action, to cultural superpower?
When inclusion becomes "the way we do things around here," it transforms from an initiative into a core part of organizational identity.
Want to be part of this transformation? Join our free Emergent Inclusion Framework virtual event. Whether you're a skeptic or champion, your voice matters in this conversation.
I hope to see you there! Tell a friend 😊
I hope this was helpful. . . Make it a great day! ✌🏿