B Lab Forces For Good Podcast — Episode 10: Can we make AI work for us?

AI is changing work faster than most of us realize. Is it time to start putting ethical guardrails around it?
In the season finale of our Forces For Good podcast, we dive into the hidden algorithms shaping our careers, workplaces, and futures. Emmy-winning journalist Hilke Schellmann shares the origin story behind her book The Algorithm, while B Corp leader Richie Jones shows how business can use tech to build better jobs, not just grow profit.
Listen now to explore:
Why AI in hiring often reinforces old biases under a new name
How transparency and collective action can keep tech in check
What it takes to build a workplace where people—and ethics—come first
The future of work is here. Let’s make sure it works for all of us. Tune in now: https://lnk.to/Forces-for-Good-AI
TRANSCRIPT: Season 3 — Episode 10
This is Forces For Good, a podcast from B Lab, the nonprofit network powering the global B Corp movement. I’m your host, Irving Chan-Gomez.
Forces For Good takes a hard look at how businesses are helping to solve the biggest social and environmental challenges of our time.
We're excited to be back with Season 3 to dive deep into what makes a good job. How do you feel when we bring up the topic of AI at work? If you're an AI developer or a C-suite executive, you may feel excited, even hopeful. But if you're an average worker, you're more likely to feel worried.
And that's no surprise. In the next few years, AI is poised to radically transform the way we work, in both positive and negative ways. It's already being used to screen job candidates, monitor worker productivity, and even replace jobs entirely. But we can also view this moment as an opportunity: an opportunity for workers, business leaders, and governments to come together and harness this powerful technology for good.
So how do we do that? Well, first we need to understand how AI is currently being used. That's something my first guest, Emmy Award-winning journalist Hilke Schellmann, has written an entire book about. It's called The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now.
Hilke has been thinking and writing about this subject since long before it dominated headlines. So before she lays it all out for us, I wanted to know how she first came upon the topic.
Hilke: You know, I was looking around for the next big project, wondering what I was going to do. And then in November 2017, I was at a conference with consumer lawyers, nothing to do with AI or big data. But I needed a ride back to the train station from the conference, so I caught myself a Lyft. I got in the back seat and asked the driver how he was doing. And he said, you know, I've been having a really weird day. In the history of me taking Lyft and Uber rides, that had never happened. And he told me that he'd had a job interview with what he called a robot that day.
He had applied for a baggage handler position at a local airport, and he got a call from a quote-unquote robot, probably a set of pre-recorded questions, and the voice asked him to answer them if he wanted the job. He thought that was somewhat unsettling, disturbing maybe, and we talked about that. I'd never heard of it.
And then a few months later, I did go to an AI conference, and somebody who had just left the Equal Employment Opportunity Commission was talking about it too, saying she can't sleep at night because there are so many tools being used to check whether employees are at their desks and how many hours they're working. She felt this could have a real negative impact on folks who are caregivers. If you have parental responsibilities, you might not be able to sit at your desk ten hours a day, but you might still be a high-producing employee. Or if you have a disability, you may have longer absences. And a very basic algorithm just asks: how many hours are you at your desk?
So I was like, whoa, something is happening here, others are taking note, and we should really be looking at this. That's how I got started. And, you know, seven or eight years later, I'm still totally fascinated by how AI is changing our life in the workplace, but also every other aspect of our lives.
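To make that desk-time algorithm concrete, here is a deliberately crude sketch of the kind of metric Hilke describes. Everything in it, the names, the numbers, the threshold, is invented for illustration; the point is only that an hours-only score flags exactly the caregivers and disabled workers she mentions.

```python
# A deliberately crude sketch of an "hours at the desk" metric:
# it ranks workers purely by desk time and ignores actual output.
# All names, numbers, and the threshold are invented for illustration.

workers = [
    # (name, avg hours at desk per day, tickets closed per week)
    ("Ana (caregiver, leaves at 3pm)", 6.0, 42),
    ("Ben", 10.0, 31),
    ("Chris (disability, more absences)", 5.5, 38),
    ("Dana", 9.5, 29),
]

DESK_HOURS_THRESHOLD = 8.0  # the only thing this "algorithm" looks at

for name, hours, output in workers:
    flagged = hours < DESK_HOURS_THRESHOLD
    verdict = "FLAGGED as low performer" if flagged else "ok"
    print(f"{name:36s} hours={hours:4.1f} output={output:2d} {verdict}")

# Ana and Chris close the most tickets on the team, yet the hours-only
# metric flags both of them: the caregiver and disability scenarios
# Hilke warns about.
```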
One of the biggest revelations to emerge from Hilke's reporting is that the uneasiness and hostility workers feel toward AI isn't necessarily directed at the technology itself, but rather at how that technology is being used and interpreted by companies.
Hilke: When I started digging, I found out, for example, that with emotion recognition, a computer can quote-unquote read, or infer from a smile, that I'm happy. But am I truly happy? There's actually very little science behind the idea that a frown means you're angry. The companies that sell this software say that most people share these emotions based on these facial expressions, but it turns out there is really no science backing up that this is universal, or that facial expressions have anything to do with success in the job.
So I feel like now, as we're starting to use this technology, is the time to fight back: to ask for transparency and accountability, and to ask, if we use these tools, that we know they work, that they actually measure what they're supposed to measure, and that they don't just bring in discrimination.
Many people have encountered AI in hiring over the past several years. It's frustrating to realize that you may have been rejected without a single human ever looking at your resume.
Hilke: A lot of Fortune 500 companies get so many resumes, because it's so easy to just upload a resume or apply with one click, that they get overwhelmed. They drown in these resumes. So they use the technology to weed them out.
But does the technology really work? That's the question, and we actually don't know. What we do know is that job seekers feel like, wait, I'm really qualified, but my application goes into this black hole. I never hear back, or six months later I get a sort of automated rejection email.
This is weird, right? I was very qualified for this position. And on the other side, we know from surveys of C-suite executives who were asked, if your company uses AI for hiring, does the tool reject qualified candidates, that almost 90 percent answered yes.
So they know that their tools don't always find the most qualified candidates, which is supposedly the whole priority in building these tools. They also know the technology is imprecise and not very good as of today. And if even the people who use the technology acknowledge that it's imprecise and not working well, then maybe the problem really is the technology.
If companies know that their AI tools are deficient, why don't they just ask for better ones? Well, this leads to another problem that is rooted in how AI developers are funded.
Hilke: You know, AI vendors are often venture-backed startups. They need to go to market really quickly, and they have a lot of pressure from investors to make the money back. So they don't have an awful lot of time to think through: is this scientifically sound, or is this just correlation versus causation?
Does a facial expression in a job interview actually have a causal relationship to success in the job? Or am I just measuring people's facial expressions, looking at the folks I'm building this model on, seeing that they're successful in the job, and concluding that therefore this has to matter? Which doesn't necessarily mean it does matter, right?
So I think that's the problem: we don't take the time to think through what we are measuring. Is this actually a fair measurement? Does it actually find out what we want to find out? And when we have for-profit businesses that need to make money back quickly, that's a real problem here.
And then we have large organizations and companies that buy the technology. They want something out of the box that works, they trust the vendor that it works, and they want lower labor expenses; they don't want to hire people to oversee these AI tools. So companies don't yet have the knowledge, or the people, to actually test the technology and make sure it works over the long term. And once companies find out that an AI tool doesn't work as well as advertised, the problem is they often have an interest in just quietly dropping it. They don't want to publicize that they made a mistake, that they maybe bought a resume parser that had screened out many more women than men over three years, right?
They're afraid that if they say publicly that the tool didn't work, and maybe even discriminated against people, they'll have a PR disaster on their hands. Somebody might sue them for gender discrimination, and they might have a case, right?
So the problem is that job seekers don't really know anything about how these technologies are being used on them, and the public, even folks in the industry, are none the wiser. If a tool doesn't work, we all need to know, so we can put pressure on the vendors to build better tools and get organizations to share this knowledge.
Then organization B doesn't buy what organization A has already binned. We need to break this cycle and demand transparency and accountability. With that, we can push vendors to do better and companies to buy better products.
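What might "testing the technology" look like in practice? One widely used screen is the EEOC's four-fifths rule: compare each group's selection rate to the highest group's rate, and treat a ratio below 0.8 as a red flag for adverse impact. The sketch below applies it to invented counts; a real audit would pull these from a screening tool's actual logs.

```python
# A minimal adverse-impact check using the EEOC's four-fifths rule.
# The applicant and advancement counts here are hypothetical; a real
# audit would come from the resume-screening tool's own records.

def selection_rate(advanced: int, applicants: int) -> float:
    return advanced / applicants

outcomes = {
    "men":   {"applicants": 1000, "advanced": 300},
    "women": {"applicants": 1000, "advanced": 180},
}

rates = {group: selection_rate(o["advanced"], o["applicants"])
         for group, o in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest  # ratio vs. the best-treated group
    flag = "possible adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group:6s} rate={rate:.0%} impact ratio={impact_ratio:.2f} {flag}")

# With these numbers, women advance at 18% vs. 30% for men, an impact
# ratio of 0.60, well under the 0.8 threshold. A tool producing logs
# like this would warrant scrutiny before anyone quietly drops it.
```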
So far, we've focused on AI's role in the hiring process. But what about after you get the job? How should AI be used in the workplace? Can it actually benefit workers instead of just replacing them?
Fortunately, some business leaders are tackling these questions head-on, including Richie Jones, CEO and founder of the B Corp certified e-commerce company vvast, which helps brands build their online presence.
Richie: In effect, the business sits behind the actual consumer experience; we are the ones creating that. So what you see on Facebook, Meta, Instagram, that whole portfolio, all the way through to the warehouse experience, customer service, any interaction with that brand, we are involved with. Our combination of tech and building some best-in-class teams, coupled with the fact that we're B Corp certified, has really positioned us in a unique and exciting place in the market.
Richie's interest in AI is personal. He believes that tech business leaders like himself have a responsibility to shape our relationship with AI for the benefit of future generations.
Richie: Having already overseen, in my generation, the transition of the internet coming in, and basically seeing that, in effect, we made a bit of a pig's ear of how that's been implemented in some senses. A pig's ear, by the way, for the non-UK-based listeners, means it's not been the ideal way of rolling out technology. You know, look at how social media has played out so far. And the context on me personally is, I mean, I surf and mountain bike, and I feel like I've got a real responsibility with the planet. I've also got kids, and I'm really keen to leave a strong legacy of how we implement AI. Particularly those of us in tech, we've got this opportunity to shape what that AI future looks like. And I think the deployment of it, and how we are going to implement it going forward with the tech that we've got and so on, is with using sort of those core...
One way he does this is by using his influence to gently nudge brands in the right direction.
Richie: What's so interesting is that there are certain things e-commerce and brands have been doing for years that drive incremental revenue—but even before AI, some of them were ethically questionable, right?
One example we’re all familiar with is the abandoned basket email. You’re on a website, you add something to your cart, and because you’ve bought before, the brand knows who you are. Twenty minutes later: “Hey, you left something behind.” These emails are super successful, and sure, AI and big data can make them even more so.
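Here, as a point of reference, is a minimal sketch of that abandoned-basket trigger. The send_email() helper and the in-memory cart records are hypothetical stand-ins; a real store would drive this from an event queue or a scheduled job rather than a direct call.

```python
# A minimal sketch of an abandoned-basket trigger: a known customer
# with items in the cart and no checkout after 20 minutes gets a nudge.
# send_email() and the cart records are hypothetical stand-ins.

from datetime import datetime, timedelta

ABANDON_AFTER = timedelta(minutes=20)

def send_email(address: str, subject: str) -> None:
    print(f"-> emailing {address}: {subject!r}")  # stand-in for a real email service call

def check_abandoned_carts(carts, now):
    for cart in carts:
        idle = now - cart["last_activity"]
        known_customer = cart["email"] is not None  # "the brand knows who you are"
        if known_customer and not cart["checked_out"] and idle >= ABANDON_AFTER:
            send_email(cart["email"], "You left something behind!")

now = datetime.now()
carts = [
    {"email": "returning@example.com", "checked_out": False,
     "last_activity": now - timedelta(minutes=25)},  # gets the email
    {"email": None, "checked_out": False,
     "last_activity": now - timedelta(minutes=40)},  # anonymous visitor: no nudge
]
check_abandoned_carts(carts, now)
```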
Richie: But as a B Corp, we’ve committed to a mission that challenges us—and the brands we work with—to ask: is this ethically right? Are we nudging people to buy things they don’t actually need?
That’s where this questioning of ethics started. And you can’t beat the B Corp application process for making you confront your business model.

How are you working with your team on this? How are you upskilling them? Is this something that comes up regularly?
Richie: When I say the B Corp application process was confronting, it really was. We realized that most of the apps and software we use—from Apple to Adobe to Google—have AI running behind the scenes. Sometimes you opt in, sometimes you don’t.
So, we created a forum with our team to talk openly about how we're using AI. And we actually hosted an “AI amnesty.” We said: Look, who’s using ChatGPT? Who’s playing around with Gemini? Everyone came forward and shared how they were using these tools. It turned into a kind of internal audit.
Then we created—okay, I hate the name, we need a better one—the AI Steering Committee. It’s a clear structure: we meet monthly, we talk about how AI is being used, whether it’s helping or harming, whether anyone feels threatened by it.
The committee also serves as a testing ground. If someone wants to try a new AI tool, it gets vetted there first. This kind of transparency is crucial. We're just at the beginning stages of AI adoption, but in three years, as agents get more sophisticated, it’s junior roles that may feel most threatened. That’s why we need to upskill those folks now.

Transparency and accountability. Richie and Hilke bring different expertise, but they share a common mindset.
In the spirit of transparency, Richie is also sharing his AI insights beyond his team. He co-founded the Ethical AI Coalition.
Richie: With the Ethical AI Coalition, we’re actively working to open-source our thinking. The inspiration? Volvo. They open-sourced their seatbelt patent so any manufacturer could use it—for free or at very low cost—which saved thousands of lives.
That’s the goal: build a collaborative coalition of individuals, businesses, and organizations that care about ethics, people, planet, and profit—and place that at the heart of how AI is deployed.
We’ve also been working with the Align AI Collective. We actually commissioned a research paper that found around 80% of participants expressed discomfort or suspicion about AI. That paper is open source as well.
It’s clear—we’re in for a bumpy five, maybe ten or fifteen years as we rebalance. The workers most exposed are those in manual or routine roles. A responsible transition will require not just business leadership, but action from governments.
Here in the UK, the last election barely mentioned this transition in party manifestos. That needs to change. Right now, the average voter may not be alarmed because layoffs haven’t hit yet—but they’re not far off. AI and its impact need to become a mainstream political issue. We can’t rely on goodwill alone to ensure AI is used ethically. We need regulation—laws informed by research.
At the end of 2024, B Lab Co-founder Andrew Kassoy published an opinion column in The New York Times, specifically addressing ChatGPT creator OpenAI—but the guardrails he proposed could apply to any company building AI into their strategy.
The core message? No one can tackle this alone. Not a worker, not a CEO, not even a government. We all need to be watching how AI is being implemented—and demanding transparency and accountability.
Workers can’t hold companies accountable if they don’t know how AI is being used. And companies can’t make smart, ethical choices if they don’t understand the technology behind the tools.
Some governments are stepping up.
Hilke: We’ve seen progress in the European Union with the EU AI Act. They’ve removed some of the worst tech practices—like facial emotion recognition in hiring tools. That’s now outlawed. So is using personality profiling to make high-risk assessments in employment.
Some companies voluntarily let go of those tools earlier, but now the law backs that up. If we could adopt some of these regulations in the U.S., it would be a huge step forward.

The future of work is here. Let’s make sure it’s fair and equitable for everyone.
AI can be a force for good. But we have to get it right.
We hope this season helped you understand what makes a good job—and what you should expect from your employer.
That’s a wrap for Season 3 of Forces for Good. Follow us on social media for updates on B Lab and what’s next for the show.
To learn more about B Corps and purpose-driven companies, visit BCorporation.net. Subscribe, rate, and review Forces for Good on Apple Podcasts, Spotify, or wherever you listen. Your support helps us reach more listeners like you. The views and opinions expressed in this podcast are those of the interviewees and do not reflect the positions or opinions of the producers or affiliated organizations.
This podcast is brought to you by B Lab, in partnership with The Gates Foundation. Special thanks to Sherri Jordan for coordination.
Forces for Good is produced by Hueman Group Media.
I’m your host, Irving Chan-Gomez. Thanks for listening—and see you next time!