If we want AI to serve the public good—especially in mental health and addiction treatment—we need to stop building it in boardrooms and start co-creating it in the field. Tech without context is useless, and in some cases, dangerous. The fastest way to accelerate AI for social good is to get it in the hands of frontline practitioners who understand the human cost of delay, misdiagnosis, or generic treatment. In my world—running an addiction recovery center—the biggest gap isn't data, it's real-time insight. We don't need AI to write more reports. We need it to flag relapse risk from behavioral patterns, spot gaps in continuity of care, and help overworked clinicians prioritize the right patients at the right time. One strategy that works? Cross-sector collaboration. We've started working with mental health-focused tech groups who are open to being in the trenches—sitting with clinicians, listening to what actually helps, and building tools that integrate into existing workflows, not replace them. The magic isn't in the algorithm—it's in how well it respects the human environment it's trying to serve. We also need policy support that protects data integrity but doesn't strangle innovation. AI in healthcare dies in red tape unless regulators and practitioners work together early. Bring behavioral health experts, patients, and AI developers to the same table. That's where responsible innovation happens. Bottom line: If AI is going to drive real social good, it has to earn trust where it matters most—on the ground, with people who don't have the luxury of getting it wrong.
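A minimal sketch of what that kind of real-time flagging could look like. Everything here is hypothetical: the behavioral features, weights, and thresholds are invented purely for illustration and would need clinical validation before any real use.

```python
from dataclasses import dataclass

@dataclass
class PatientWeek:
    patient_id: str
    missed_appointments: int   # count this week
    checkin_rate: float        # fraction of daily check-ins completed, 0-1
    days_since_last_contact: int

def relapse_risk_score(week: PatientWeek) -> float:
    """Toy heuristic: blend simple behavioral signals into a 0-1 risk score.
    The weights are made up for illustration, not clinically derived."""
    score = 0.0
    score += 0.3 * min(week.missed_appointments, 3) / 3
    score += 0.4 * (1.0 - week.checkin_rate)
    score += 0.3 * min(week.days_since_last_contact, 14) / 14
    return score

def triage(weeks: list[PatientWeek], top_n: int = 5) -> list[PatientWeek]:
    """Surface the patients a clinician should look at first."""
    return sorted(weeks, key=relapse_risk_score, reverse=True)[:top_n]
```

The point of something this small is exactly what the response argues: it prioritizes clinician attention inside an existing workflow rather than generating another report.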
Make it open. The quickest way to accelerate AI for social good is to break it out of the labs and into the hands of the people already solving real problems. Teachers, healthcare workers, local councils, non-profits. They don't need hype. They need access, support, and tools that actually work on the ground. The strategy? Partnerships between tech companies and grassroots organisations. Not just top-down grants, but co-created solutions. Build with them, not for them. And keep it transparent. Open models, open data, open results. That's how you scale trust and impact at the same time.
A stranded American couple missed their international flight—until our driver made it to the airport from Polanco in 21 minutes flat, navigating roadblocks with real-time AI-assisted routing. That moment opened my eyes to how AI, when placed in the hands of local experts, can become a tool for social good far beyond logistics. At Mexico-City-Private-Driver.com, we don't just drive—we act as bridges between cultures, systems, and emergencies. And AI helps us do it faster, safer, and with more compassion. I believe one powerful way to accelerate the use of AI for social good is to embed it into hyperlocal human networks. In our case, that means training drivers—many of whom are family breadwinners—with AI-enhanced tools that interpret traffic patterns, air quality alerts, or even civil protests in real time. These aren't just routes. They're lifelines. They help us get patients to hospitals, solo women travelers to safe hotels, and visitors to immigration offices on time—without stress or ambiguity. By combining open-source models with hyper-contextual data—from Twitter alerts to WhatsApp groups—we've created a system that adapts faster than any navigation app alone. It's not about the tech. It's about how tech supports trust, in a city that's often chaotic and overwhelming for newcomers. To scale this kind of impact, we need partnerships between small operators and larger AI research teams. Imagine if local driver fleets across Latin America shared anonymized travel and safety data to improve AI fairness or map underrepresented urban zones. The ripple effect would be massive—from safer commutes to more equitable city planning. In short: AI for social good begins on the ground. And in Mexico City, it often begins in the back seat of a well-driven car.
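As a rough illustration of blending hyperlocal signals with a navigation estimate, here is a toy sketch. The feeds, penalty weights, and field names are assumptions for illustration, not the company's actual system.

```python
from dataclasses import dataclass

@dataclass
class RouteOption:
    name: str
    base_eta_min: float        # the navigation app's estimate
    protest_reports: int       # e.g., flagged in local WhatsApp groups
    roadblock_alerts: int      # e.g., seen in official Twitter alerts

def adjusted_eta(route: RouteOption) -> float:
    """Penalize routes carrying hyperlocal incident signals the nav app misses.
    The penalty minutes are invented for illustration."""
    return route.base_eta_min + 8 * route.protest_reports + 12 * route.roadblock_alerts

def best_route(options: list[RouteOption]) -> RouteOption:
    return min(options, key=adjusted_eta)
```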
One idea to help accelerate the use of AI for social good, particularly in addressing animal welfare, is to develop a community-based lost pet alert system powered by AI and real-time technology. By combining AI-driven image recognition, geolocation data, and crowdsourced updates, this system could drastically improve the speed and success rate of reuniting lost pets with their owners. For example, when a pet goes missing, an owner could upload a photo and details into an app that uses AI to scan shelter intakes, social media posts, and public camera footage for matches. The system could also send alerts to nearby users, volunteers, and local animal organizations, creating a real-time network of support. This kind of tool would be especially impactful in areas where access to traditional resources like microchips or vet records may be limited. Strategic partnerships between animal shelters, tech developers, and local governments would be key to making this initiative effective and scalable, ensuring AI is used in a way that directly supports the wellbeing of animals and their human families.
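A minimal sketch of the photo-matching step, using the open-source CLIP model via the sentence-transformers library. The similarity threshold is a placeholder a real deployment would tune, and the file paths are hypothetical.

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP maps photos into a shared embedding space, so visually similar
# animals land near each other.
model = SentenceTransformer("clip-ViT-B-32")

def embed(path: str):
    return model.encode(Image.open(path), convert_to_tensor=True)

def find_matches(lost_pet_photo: str, shelter_photos: list[str],
                 threshold: float = 0.85):
    """Return shelter photos whose embedding is close to the lost pet's,
    sorted by similarity. The 0.85 cutoff is illustrative only."""
    query = embed(lost_pet_photo)
    matches = []
    for path in shelter_photos:
        sim = util.cos_sim(query, embed(path)).item()
        if sim >= threshold:
            matches.append((path, sim))
    return sorted(matches, key=lambda m: m[1], reverse=True)
```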
One way to accelerate AI for social good: Give nonprofits plug-and-play access to high-quality AI tools without the enterprise price tag or the learning curve. Currently, many AI tools are designed for tech teams rather than mission-driven organizations. That's the bottleneck. The groups doing real work—food banks, clinics, legal aid networks—don't need a model they have to train. They need prebuilt templates that help them do more with less today.

What would help:
- Partnerships between AI providers and civic tech orgs
- Templates for use cases like triaging services, automating form fills, or translating legal documents
- Grant-backed sandboxes where nonprofits can test AI without risking compliance or budget

Bottom line: If we want AI to solve real problems, we need to stop thinking of it as a research toy and start packaging it like a wrench. Make it simple, useful, and available to the people fixing what's broken. That's how impact scales.
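To make "prebuilt template" concrete, here is a minimal sketch of one such template: a no-training document translator built on an open model (Helsinki-NLP's OPUS-MT, via the Hugging Face transformers library). The function name and paragraph-splitting behavior are illustrative choices, not a shipped product.

```python
from transformers import pipeline

# English-to-Spanish translation with an open model; no training required.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

def translate_letter(text: str) -> str:
    """Translate an English client letter to Spanish, paragraph by paragraph."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    translated = [translator(p, max_length=512)[0]["translation_text"]
                  for p in paragraphs]
    return "\n\n".join(translated)
```

A nonprofit staffer only ever calls `translate_letter()`; everything else is packaging, which is the whole argument above.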
I believe that partnerships among tech companies, government, and non-profits are the key to increasing the use of AI for social good. One approach is to apply AI-driven applications to pressing social challenges such as climate change, access to medical care, and educational inequality. The convergence of AI and the SDGs can harness the power of AI to create and scale data-driven solutions for global challenges. For example, AI technologies can improve traffic planning to reduce congestion and emissions, which directly supports environmental sustainability. At LAXcar, we're exploring additional AI-enhanced tools to optimize our fleet, cutting both carbon emissions and wasted fuel. Working with environmental organizations could help scale these technologies and amplify how AI is used to reduce transportation emissions. To ensure AI truly has a positive impact on the world, I also believe transparency and ethical frameworks are necessary. We need to develop AI systems inclusively, in ways that reflect what society wants and avoid bias. If we build alliances between schools and industry partners, we can increase AI literacy, develop and train diverse teams, and engage local community members with the local issues AI can help address.
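As a toy illustration of emissions-aware fleet optimization, the sketch below greedily assigns each pickup to the nearest idle vehicle to cut empty miles. Real dispatch would use a proper routing solver; the coordinates and the greedy rule are simplifications for illustration.

```python
from math import dist

def assign_vehicles(pickups: list[tuple[float, float]],
                    idle_vehicles: dict[str, tuple[float, float]]) -> dict:
    """Greedy nearest-vehicle assignment: fewer deadhead miles means
    less fuel burned per trip. Returns {vehicle_id: pickup_location}."""
    assignments = {}
    available = dict(idle_vehicles)
    for pickup in pickups:
        if not available:
            break
        nearest = min(available, key=lambda v: dist(available[v], pickup))
        assignments[nearest] = pickup
        del available[nearest]
    return assignments
```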
One way to accelerate AI for social good is by building collaborations between domain experts and AI teams. When those working directly with real-world problems—like healthcare workers, educators, or emergency responders—collaborate with AI specialists, solutions become more grounded and practical. Open-sourcing relevant models and datasets can also make a big difference. It encourages broader participation and lets smaller teams innovate without heavy upfront investment. Another effective move is creating shared platforms where organizations can test and refine AI tools in real scenarios—focusing on safety, fairness, and transparency along the way.
Simple, low-cost solutions that start with community pain points are a practical way to accelerate AI for social good. Real impact emerges through strategic collaborations between product teams and on-the-ground workers, including nonprofits, city programs, and local educators, to create problem-solving tools for daily operational needs. In one such project, a simple, focused AI application delivered a 30% improvement in delivery efficiency and fewer family denials within one month, freeing team members to perform their core duty of assisting people.

Strategies That Work:
- Establish partnerships with domain experts first. Don't create solutions independently; collaborative development leads to better results. Work directly with social workers, clinic staff, and school administrators, and study the complicated situations these professionals handle.
- Use open-source tools and cloud credits. With basic guidance, Hugging Face models, Google Colab, and Airtable automations let organizations achieve substantial results without dedicated funding or engineering staff (see the sketch below).
- Design for trust. Data privacy is non-negotiable. The model should demonstrably assist rather than replace human work, with anonymization techniques and participant control built in.

The Bigger Picture: to achieve greater impact, we need standardized templates, easily modifiable starter kits that others can reuse. A standardized AI starter package with practical models, ethical guidelines, and real examples for food, housing, and education would let communities make faster progress. AI initiatives for social good should begin directly in community settings, not laboratories: in shelters, classrooms, and clinics, driven by caring teams and technology that serves their organizational goals.
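Here is a minimal sketch of that zero-budget approach: zero-shot triage of intake messages with an off-the-shelf Hugging Face model, runnable in a free Google Colab notebook. The category labels are hypothetical.

```python
from transformers import pipeline

# Zero-shot classification needs no training data: the model scores
# arbitrary candidate labels against each message.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

LABELS = ["housing", "food assistance", "medical", "scheduling"]

def triage_message(message: str) -> str:
    """Tag an intake message with its most likely service category."""
    result = classifier(message, candidate_labels=LABELS)
    return result["labels"][0]  # labels come back sorted by score

print(triage_message("We lost our apartment and need somewhere to stay tonight."))
```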
One way to accelerate the use of AI for social good is to embed it into low-barrier tools that meet people where they already are, especially in underserved or emotionally sensitive areas like mental health, education, or public service. At Aitherapy, we've seen how something as simple as 24/7 access to AI-powered emotional support can make a real difference for people who can't afford or access therapy. The key isn't just the tech; it's designing AI to feel safe, private, and human enough to be trusted. To scale this impact, we need stronger partnerships between AI builders and nonprofits, governments, and public health organizations. These collaborations can shift AI from a "cool product" to a critical utility. AI for social good isn't about building flashier models. It's about designing context-aware, emotionally intelligent tools that fit into real lives: quietly, respectfully, and at scale.
Currently, many social good initiatives leveraging AI are bespoke, often siloed projects. A non-profit addressing food insecurity might build a predictive model, while another tackling disaster relief develops a different AI tool. This leads to:
- Duplication of effort: reinventing the wheel for similar underlying AI challenges.
- Limited scalability: solutions are hard to adapt and deploy elsewhere.
- Lack of collective intelligence: the brilliant minds working on these problems aren't easily sharing insights or building upon each other's work.
- Accessibility barriers: smaller organizations without deep pockets or technical expertise struggle to even get started.

Open-source AI agent platforms, designed with social good as their core mission, can shatter these barriers. Imagine a GitHub-like ecosystem, but specifically for modular, ethically aligned AI agents focused on areas like public health, environmental sustainability, education, and humanitarian aid. To get there, we need to move beyond monolithic AI applications to discrete, interoperable AI agents. This means developing open standards for how these agents communicate, share data (securely and ethically), and perform specific tasks. Think of it like microservices for AI, allowing developers to pick and choose "blocks" of functionality. For example, a "data anonymization agent" could be universally applied across various social good projects without being rebuilt each time.

Governments, philanthropic organizations, and even corporations should sponsor significant, recurring "AI for Social Good" challenge sprints. These aren't just hackathons; they're sustained, incentivized programs where teams develop and contribute AI agents to the open platform to solve specific, well-defined societal problems (e.g., "AI agent for early warning of emerging infectious diseases," "AI agent for optimizing last-mile aid delivery"). Bounties could be awarded for performance, ethical alignment, and reusability.

Finally, we need "Ethical AI Agent Registries" and responsible-AI guardrails, because for social good, trust is paramount. The platform must include mechanisms for transparently documenting an AI agent's purpose, data sources, ethical considerations, and performance metrics. This fosters accountability and ensures that AI is used to empower, not exploit. The challenge is immense, but so is the potential reward.
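A minimal sketch of the interoperable-agent idea, assuming a shared (hypothetical) interface that every agent implements so a data anonymization agent can be dropped into any pipeline. A production agent would use proper PII detection rather than these toy regexes.

```python
from typing import Protocol
import re

class Agent(Protocol):
    """The hypothetical shared contract: every agent exposes run(payload)."""
    name: str
    def run(self, payload: dict) -> dict: ...

class AnonymizationAgent:
    """Reusable 'block': strips simple identifiers before data is shared."""
    name = "data-anonymizer"

    EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
    PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

    def run(self, payload: dict) -> dict:
        text = payload["text"]
        text = self.EMAIL.sub("[EMAIL]", text)
        text = self.PHONE.sub("[PHONE]", text)
        return {**payload, "text": text}

# Any project composes agents the same way, like microservices.
agents = [AnonymizationAgent()]
record = {"text": "Reach me at maria@example.org or 555-123-4567."}
for agent in agents:
    record = agent.run(record)
print(record["text"])
```

The value is in the shared contract, not the regexes: once projects agree on how agents receive and return payloads, the same anonymizer serves food-security and disaster-relief pipelines alike.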
To accelerate the use of artificial intelligence for social good, we need to move beyond isolated innovation and start funding frictionless infrastructure—public data ecosystems, accessible model APIs, and ethics-first frameworks that make it easier for people to build with purpose, not just profit. One of the biggest blockers I see when working with early-stage teams and scale-ups is not ambition, but access. The smartest minds in climate tech, healthcare, education—often outside the Silicon Valley bubble—are stuck wrestling with fragmented tooling, gated datasets, and compliance mazes. We need to bridge that gap with partnerships between regulators, researchers, and AI builders that remove technical and bureaucratic bottlenecks without diluting accountability. In my work designing go-to-market and growth systems across high-impact startups, I've seen firsthand how much faster innovation moves when small teams can plug into trusted AI infrastructure—think open-source libraries backed by academic rigour, or sandbox environments where social entrepreneurs can test sensitive use cases (like in aged care or disaster relief) without the fear of legal fallout. And here's the kicker: it's not just about building for social good; it's about building with and by the communities it serves. Community-first product thinking, where co-design with end users is the default, not a phase. If we treat AI like a top-down silver bullet, we'll miss its real power: scaling local solutions globally, while still grounded in the nuance of lived experience. The tech is ready. What we need now is collective courage, shared standards, and systems thinking that sees AI not as an endpoint—but as an enabler.
One way I've seen real traction in using AI for social good is through partnerships with domain experts outside of tech. A few years ago, I worked with a nonprofit tackling food insecurity. They had deep community ties and data about supply and demand gaps, but no technical infrastructure. We built a lightweight recommendation system using off-the-shelf tools—nothing fancy—that helped them optimize food distribution routes based on need, not just geography. What made it effective wasn't the model; it was the tight collaboration between technologists and people who understood the human side of the problem. That experience convinced me that to accelerate AI's social impact, we need more embedded partnerships—teams where data scientists sit shoulder-to-shoulder with public health workers, educators, or housing advocates. Too often, AI projects for social good are driven from the lab out, instead of the ground up. We need to flip that. The best results I've seen came from aligning AI capabilities with grassroots knowledge and constraints from day one. If we invest in these kinds of field-level alliances, the technology will follow in the right direction.
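A toy version of "routes based on need, not just geography" might rank delivery stops by a need-weighted score instead of pure distance. The fields and weighting below are illustrative, not the nonprofit's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Stop:
    site: str
    distance_km: float           # from the warehouse
    households_served: int
    days_since_last_delivery: int

def priority(stop: Stop) -> float:
    """Higher score = visit sooner. Need dominates; distance is a tiebreaker."""
    need = stop.households_served * stop.days_since_last_delivery
    return need / (1 + stop.distance_km)

def plan_route(stops: list[Stop]) -> list[Stop]:
    return sorted(stops, key=priority, reverse=True)
```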
A few years ago, my small AI consultancy partnered with a local mental-health nonprofit for what we called an "AI for Good Residency." We embedded one of our data scientists in their office for three months, working shoulder-to-shoulder with counselors and intake coordinators instead of building a solution in isolation. On day one, we expected to jump straight into sentiment-analysis models on their chat transcripts. By day three, we discovered a far bigger bottleneck: volunteers were manually anonymizing every message before it ever hit the case file, spending up to four hours each week just redacting names and locations. That meant real insights were delayed by days, and data-driven triage was impossible. So we shifted focus. Rather than fine-tuning the fanciest language model, we built a tiny Python tool—just 200 lines—that used a lightweight named-entity recognizer to strip personal details and auto-tag conversation themes (crisis, medication questions, scheduling). Within weeks, anonymization time fell by 80 percent, and our partners could safely centralize all transcripts for real-time trend analysis. That simple tool is now open-sourced on GitHub, and two other nonprofits have adopted it verbatim. Embedding our expert on the ground—listening, observing, then co-designing—was the catalyst. It meant we didn't solve the "wrong" problem, we solved the most painful one. And by open-sourcing our code, we turned a single three-month pilot into a community resource that any nonprofit can adapt without hiring an ML team. Action you can take: launch a micro-residency or fellowship with a mission-driven organization in your city. Even if all you contribute is four hours a week of developer time for two months, the domain experts will surface challenges you'd never uncover remotely—and the rapid feedback loop will deliver a tool that actually moves the needle. Package your work as an open-source starter kit, share it on social channels, and watch other groups pick it up. That blend of embedded collaboration and open distribution is, in my experience, the fastest way to turn AI from a buzzword into tangible social impact.
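For a sense of what such a tool involves, here is a heavily condensed sketch in the same spirit, using spaCy's small English model for entity redaction plus simple keyword theme tags. This is a reconstruction under assumptions, not the actual open-sourced code.

```python
import spacy

# pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

REDACT = {"PERSON", "GPE", "LOC", "ORG"}
THEMES = {
    "crisis": ["suicide", "hurt myself", "emergency"],
    "medication": ["dose", "prescription", "refill"],
    "scheduling": ["appointment", "reschedule", "availability"],
}

def anonymize(text: str) -> str:
    """Replace named entities with their label, e.g. 'Maria' -> '[PERSON]'."""
    doc = nlp(text)
    for ent in reversed(doc.ents):  # reversed so character offsets stay valid
        if ent.label_ in REDACT:
            text = text[:ent.start_char] + f"[{ent.label_}]" + text[ent.end_char:]
    return text

def tag_themes(text: str) -> list[str]:
    """Crude keyword tagging; the real tool's theme logic was richer."""
    lowered = text.lower()
    return [theme for theme, words in THEMES.items()
            if any(w in lowered for w in words)]
```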
In my experience, one of the best ways to accelerate AI for social good is to build strong partnerships between local governments, tech companies, and community organizations. A few years ago, I worked with a municipal IT group that was struggling to deliver services quickly. They had outdated systems, overworked staff, and too many disconnected tools. We helped them connect with an AI provider focused on automation. The result wasn't just faster response times—it was greater trust from residents who felt heard and respected. That only happened because the right people were talking and working together toward the same goal. Another key strategy is making sure AI systems are designed with the community in mind. I've seen too many cases where tech gets rolled out without enough thought to who's using it. During a digital literacy training we ran for a senior center in Oakland, I saw firsthand how important it is to match the tools with people's needs. An AI tool that adjusts learning content to fit someone's pace? Game changer. But only if it's explained clearly and feels approachable. Social impact comes when people feel empowered, not left behind. If I had one piece of advice for those working in tech, it would be this: don't build in a vacuum. Listen. Test with real people. Partner with nonprofits who know the communities you're trying to help. AI has the power to close gaps—in education, healthcare, government—but it won't happen without honest feedback and shared goals. The tech is already impressive. What matters now is making sure it's used with care.
Forget building more AI that answers problems... build AI that asks better ones. The fastest way to drive change is to embed those questions in places where decision-makers live. I mean schools, local governments, HR systems, payroll software, tax prep tools. If every city utility dashboard asked, "Do you need rent relief this month?" and triggered an AI-powered routing system behind the scenes, now we are moving. That does more than any TED talk or hackathon. Fact is, the smartest tech does not need a shiny interface. It just needs to sit quietly where real decisions happen and make one of those decisions 2% smarter every day. Multiply that by 10 million users and you are talking serious impact... no headlines needed. Like I said, embed AI into the boring stuff. That is where real life happens. Make AI boring. Make it everywhere. That is how you build actual equity.
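A minimal sketch of what one of those quiet, embedded decision points might look like inside a utility-billing portal. The fields, threshold, and route name are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Account:
    customer_id: str
    late_payments_90d: int
    balance_past_due: float

def hardship_nudge(account: Account) -> str | None:
    """Return a routing action for the dashboard, or None to stay silent.
    No shiny interface: just one extra check where the decision already lives."""
    if account.late_payments_90d >= 2 or account.balance_past_due > 200:
        return "show_prompt:rent_relief_screener"  # kicks off the intake flow
    return None
```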
One of the most effective ways to accelerate AI for social good is to embed clear business objectives into collaborative projects between the private sector, NGOs, and academia. In my consulting work, I have consistently seen that technology adoption succeeds when it is rooted in measurable outcomes and operational alignment, not just idealism. For example, when advising global retailers and partners through ECDMA initiatives, we have found success in projects that link commercial incentives with societal benefits - like improving supply chain transparency to reduce food waste, or optimizing logistics to lower emissions. To drive meaningful progress, companies need to view social good initiatives as integral to their core strategy, not as side projects. That means allocating real budgets, assigning experienced teams, and measuring results with the same rigor as any core business operation. The most impactful AI applications often emerge where there is direct business value: automating accessibility features in digital commerce not only serves customers with disabilities but opens new markets; optimizing energy use in logistics both reduces costs and environmental impact. Partnerships are crucial, but they must be structured for accountability and speed. In our ECDMA Global Awards, we have seen that the strongest collaborations set shared KPIs at the outset and maintain regular executive engagement, not just technical coordination. Academic partners bring research depth, NGOs contribute on-the-ground insight, and businesses provide the operational engine to scale solutions. The underlying technology matters, but the real accelerator is disciplined execution. Projects flounder when stakeholders lose focus or treat AI as a speculative add-on. In practice, the organizations that accelerate AI for good are those that run pilots in real settings, iterate quickly, and then scale what works. This approach demystifies AI and ensures it actually solves pressing problems, rather than generating theoretical benefits. Ultimately, embedding AI for social good into the rhythm of business, backed by operational discipline and structured partnerships, is what moves the needle. It is not about chasing the latest trend, but about making AI a lever for strategic execution where commercial and societal objectives reinforce each other. This is how we have seen real, lasting impact take root.
An encouraging way of accelerating the use of AI for social good is the development of public-private data collaboratives: official partnerships whereby governments, nonprofits, and private technology companies share data, resources, and expertise to tackle pressing societal challenges (e.g., healthcare, climate, education, inequality).

Why It Matters: AI needs quality, diverse data to make smart predictions or decisions, but the best datasets are dispersed:
- Governments have demographic and environmental data.
- NGOs have grassroots, local expertise.
- The private sector has real-time behavioral or geospatial data.
Individually, each dataset is inadequate. Together, they make possible substantive models that can address actual problems.

Example Strategy: AI for Public Health
- Collaboration: a technology firm partners with a ministry of health and a global nonprofit.
- Data shared: anonymized hospital visits, disease outbreaks, and mobility data are combined.
- Impact: predict the spread of diseases like dengue or COVID-19 and optimize medical supply distribution in real time.
This was partly achieved by BlueDot and other initiatives during the COVID-19 pandemic, proving the model's efficacy.

Key Enablers:
- Ethical data-sharing agreements (GDPR-compliant, privacy-preserving)
- Open-source AI models tuned for public-sector use cases
- Local capacity building so communities can tailor and deploy AI responsibly
- Policy incentives (grants, tax credits) for the private sector to support public efforts
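A highly simplified sketch of how a collaborative's joined data could feed a predictive model: merge each partner's table on region and week, then fit a basic count regression. The column names and numbers are invented for illustration.

```python
import pandas as pd
from sklearn.linear_model import PoissonRegressor

# Each partner contributes one table, keyed by (region, week).
hospital = pd.DataFrame({"region": ["A", "A", "B", "B"], "week": [1, 2, 1, 2],
                         "fever_visits": [40, 55, 10, 30]})
mobility = pd.DataFrame({"region": ["A", "A", "B", "B"], "week": [1, 2, 1, 2],
                         "mobility_index": [0.9, 1.2, 0.7, 1.1]})
cases = pd.DataFrame({"region": ["A", "A", "B", "B"], "week": [1, 2, 1, 2],
                      "new_cases": [12, 20, 3, 9]})

# The join is the collaborative: no single partner holds all three columns.
df = hospital.merge(mobility, on=["region", "week"]).merge(cases, on=["region", "week"])

model = PoissonRegressor().fit(df[["fever_visits", "mobility_index"]], df["new_cases"])
print(model.predict([[60, 1.3]]))  # expected new cases given next week's signals
```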
Multi-sector partnerships that bring together nonprofits, governments, technology companies, and local communities represent one of the most effective means of accelerating AI for social good. The convergence of technological, domain, and on-the-ground expertise enables these partnerships to co-create AI solutions that address real-world needs. Initiatives such as the Google AI Impact Challenge have demonstrated that keeping the doors open, so to speak, through open calls and funding empowers many organisations, including those with little to no prior experience with AI, to tackle challenges in health, education, and disaster response. Giving priority to equitable access, community engagement, and ethical considerations during AI design is essential for maximising impact and fostering solutions that are inclusive, transparent, and accountable. Implementing these strategies can lead to scalable, responsible AI innovations capable of addressing society's most pressing problems.
In my experience leading Zapiy, I've seen firsthand how AI's potential can transform industries—and with that potential comes a responsibility to channel it toward meaningful social impact. To accelerate AI for social good, I believe one key strategy is fostering cross-sector partnerships that bring together technology innovators, nonprofits, governments, and communities. AI on its own is a powerful tool, but without the right collaborations, it risks being disconnected from the real-world problems it's meant to solve. By creating partnerships where diverse stakeholders share expertise and resources, we can better identify pressing societal challenges—whether it's climate change, healthcare access, or education—and build AI solutions that are both effective and ethical. Another critical element is focusing on transparency and inclusivity throughout AI development. For AI to truly serve society, it must be designed with diverse perspectives, especially from those most impacted by these challenges. This means involving community leaders and domain experts early on, not as an afterthought but as active partners in shaping the technology. Technologically, I see great promise in open AI platforms and frameworks that lower the barrier for nonprofits and smaller organizations to leverage AI tools. Making AI accessible, affordable, and customizable empowers more groups to experiment and scale solutions tailored to their unique contexts. At Zapiy, we embrace the idea that AI's impact is maximized when paired with a human-centered approach—where data, algorithms, and empathy intersect. Accelerating AI for social good means prioritizing projects with clear societal benefits and measurable outcomes, rather than just novelty. In short, the path forward is collaboration, inclusivity, and democratization of AI technology. When we bring together the right mix of expertise, align incentives around genuine impact, and make the tools accessible to those on the frontlines of social challenges, AI can become a transformative force for a better future.