
AI for Impact
Discover tools, insights on ethics, career opportunities, funding, and upskilling to leverage AI in the impact sector. Join thousands of changemakers making a difference.
Who reads this:
The audience for AI for Impact Weekly includes professionals, innovators, leaders, and some students aged 25-50, with 60% in the U.S. and others worldwide. They are impact practitioners from sectors like tech, international development, nonprofits, social enterprise, and finance, all leveraging AI for social good. This group seeks global perspectives, ethical insights, tools, upskilling opportunities, and career resources to drive responsible and inclusive AI innovation.
Available Ad Packages
Choose an advertising package that fits your needs
Includes featured placement at the top of the newsletter, labeled as a sponsor post, a logo or image up to 800 by 800 pixels, and 100 to 250 words of text.
Minimum offer: $160.00
Includes a sponsor post in the middle of the newsletter, an image or logo up to 400 by 400 pixels, 70 to 120 words of text, and a clear call to action.
Minimum offer: $120.00
Recent Issues
Check out our latest newsletter content
What Damage Can It Do? PCDN AI For Impact Newsletter, March 25, 2026
Every, The Only Subscription You Need to Stay at the Edge of AI (partner affiliate post)
A daily newsletter on what comes next in tech. 100K+ readers.
It is one of the best places to access amazing AI tools at an affordable yearly price, including Monologue (voice-to-text for Mac), Spiral (a mind-blowing writing partner), Coral (an AI email assistant for Mac), and Sparkle (Mac desktop organization).
Beyond the tools, you'll get access to excellent workshops and insights on building with and using AI effectively. Even if you don't want to sign up for the paid tier, you can subscribe to their free newsletters and podcast to stay updated on AI trends and strategies.
AI for Impact Opportunities
Looking for the best newsletter platform?
We switched to Beehiiv almost two years ago.
Creating our newsletters is now a joy.
🌱 Explode Your Growth
Our subscriber count is soaring.
Beehiiv's referral tools work wonders.
💰 Easy Monetization
Start earning with Beehiiv.
We now make over $300 a month from ads and subscriptions.
Our revenue is growing fast.
✍️ Creating is Fun Again
The process is effortless.
The design is clean and intuitive.
The AI tools are a huge help.
📊 World-Class Analytics
Track your growth.
Measure what matters.
Make smart, data-driven decisions.
🌐 Build a Beautiful Website
They have a great new website builder.
It's simple and looks amazing.
👉 Sign up with our affiliate link. Get a 14-day free trial. Plus, get 20% off for three months.
Beehiiv also has a free plan for up to 2,500 subscribers.
Using our link also helps support PCDN's work. It's a win-win.
The verdict is in — and it's just the beginning
I keep having the same conversation. At conferences, in working groups, over coffee with colleagues in the impact sector. The question is always some version of: how bad is it, really? This week, a jury gave part of the answer. On March 24, a New Mexico jury found Meta liable on all counts — willfully engaging in "unfair and deceptive" and "unconscionable" trade practices — and ordered $375 million in damages for failing to protect children from sexual predators on its platforms. First jury verdict of its kind against a major social media company. The attorneys general called it groundbreaking. They're right. It's also 20 years late.
The evidence has been piling up. A March 2026 study created fictional profiles of 13-year-olds on major platforms and found they were served content featuring guns, self-harm, sexualized material, and misogyny within three minutes of joining. One harmful piece for every minute watched. The algorithm didn't malfunction. It worked exactly as designed: maximize engagement, full stop. Meanwhile, a December 2025 UN Women report documented how generative AI has intensified deepfakes, sextortion, and automated harassment at a scale we haven't seen before, disproportionately targeting women and girls, particularly those in public life. This is not a future risk. It's happening now.
And it doesn't stop at social media. AI systems embedded in healthcare, hiring, housing, and criminal justice are encoding historical inequities directly into automated decisions — quietly, at scale, affecting the same communities our sector exists to serve. The Leadership Conference on Civil and Human Rights put out a January 2026 report on exactly this: how AI amplifies discrimination through mechanisms that are difficult to see, harder to challenge, and nearly impossible to contest when you don't even know they're operating. Discrimination has always been a systems problem. AI just gave it a faster engine and better cover.
A 2025 Candid survey of 850 nonprofits found 64% were familiar with AI bias, and more than half feared AI could harm marginalized communities. Yet only 36% were actually implementing equity practices in their own AI use. We talk about equity constantly. We put it in our mission statements. But when it comes to the tools we're deploying inside our own organizations, too many of us are moving fast without asking who bears the cost when something goes wrong.
Some organizations are doing the actual work of accountability here. The Algorithmic Justice League combines research, art, and advocacy to expose AI harms and center the voices of the most impacted communities — it's some of the most important work in the space. The AI Now Institute publishes rigorous research on how AI power concentrates and pushes for governance with teeth — their 2025 landscape report is essential reading for anyone trying to understand who is actually shaping these systems. The Center for Humane Technology is focused on what it means to preserve human agency as AI accelerates into every corner of daily life. These are not fringe voices. They're doing the foundational work that the rest of us need to build on.
So what can we actually do? A few things. Organizations can require AI impact assessments before deploying new tools — who benefits, who bears the risk, did the affected communities have any say in the design. Funders can make responsible AI practices a condition of support and direct resources toward the civil society organizations building the evidentiary base for accountability. Advocates can get behind state-level legislation — over a dozen states are moving AI accountability bills right now — especially because the White House is actively pushing Congress to preempt those very state laws in the name of innovation. The window for shaping these rules is open. It won't stay open.
Where does this go from here? More verdicts are coming. More cases across child safety, gender-based violence, discriminatory algorithms, and AI-generated disinformation. The Future Society's 2025 survey of 44 civil society organizations found strong consensus that enforceable governance is urgent — even as most national governments move slowly or in the wrong direction entirely. The All Tech Is Human Responsible AI Impact Report documents the gap between AI principles on paper and accountability in practice. That gap is the work. Technology is accelerating. Accountability is catching up. The question is whether our sector decides to help close the distance — or waits until the harms land in our own programs and communities.
NEWS & Resources
🤖 Your Daily AI Impact Joke
Why did the AI apply for a Pentagon security clearance?
Because it kept getting flagged every time it tried to "think outside the firewall."
News
OpenAI Pulls the Plug on Sora — OpenAI is shutting down its viral AI video platform following intense backlash from actors' unions, deepfake researchers, and digital rights advocates over risks of non-consensual synthetic imagery. The move signals that not every AI product survives contact with the real world.
Federal Judge: Anthropic Ban Looks Like Punishment — A San Francisco judge said the Trump administration's ban on Anthropic after the company refused to enable autonomous weapons use "looks like punishment." A preliminary injunction hearing is underway with major implications for how commercial AI firms navigate government contracts.
The Hardest Question About AI-Fueled Delusions — Researchers still cannot determine whether AI causes delusional thinking or amplifies pre-existing conditions — a gap already showing up in courts and clinical settings with enormous policy consequences.
Why AI Hasn't Caused a Job Apocalypse — So Far — Nature finds that AI's effects on employment remain modest, and that data anxiety — not sweeping automation — is fueling most fears. Researchers warn that picture could shift fast as agentic systems scale.
White House Releases AI Policy Framework — The administration's new framework prioritizes U.S. innovation dominance and pushes Congress to preempt state AI regulations. Civil liberties groups warn it largely sidesteps individual rights protections.
💼 Jobs, Jobs, Jobs
80,000 Hours Job Board — Curated high-impact roles in AI safety, policy, global health, and beyond for people who want their careers to matter.
👤 LinkedIn Profile to Follow
Sarah Stern — Founder & CEO, Cultivate — Sarah is building the talent pipeline the social impact sector desperately needs. Through Cultivate, she focuses on hiring, training, and retaining top talent for mission-driven organizations — work that becomes even more urgent as AI reshapes what roles exist and who fills them. Her background spans digital organizing at Everytown for Gun Safety, Purpose, and ACRONYM, and she's a Webby Award winner for public service and activism.
🎧 Today's Podcast Pick
Wired's Uncanny Valley — AI, War, and the Geopolitics of Disinformation — This episode from WIRED's flagship tech podcast examines how AI has become entangled in geopolitical conflict, disinformation pipelines, and prediction markets — with hosts Zoë Schiffer, Brian Barrett, and Leah Feiger providing sharp editorial context on a story that keeps getting harder to track.
If you have questions please email phil@adly.news