James Bernard

Oct 18, 2024

6 min read

Harnessing Human-Centered AI for Societal Good: Insights from Seattle's Design and Impact Community

The Global Impact Collective's "Harnessing Human-Centered AI for Societal Good" event featured an engaging expert panel discussion.


A Malawian farmer uses the UlangiziAI app to better understand how to determine crop health. The app uses a WhatsApp front end to communicate with farmers in a format that is familiar to them.



In the rapidly evolving landscape of artificial intelligence, it's crucial to pause and consider how we can harness this powerful technology for the betterment of society.

  

Recently, the Global Impact Collective brought together members of Seattle's design and impact community to explore this topic. Our event, "Harnessing Human-Centered AI for Societal Good," featured an engaging panel discussion with experts from diverse backgrounds, offering valuable insights into the challenges and opportunities presented by AI. 



Our Distinguished Panel 


We were fortunate to host three remarkable experts: 

 

1. Ruth Kikin-Gil, Responsible AI Strategist at Microsoft 

2. Jennifer Dumas, Chief Counsel at Allen Institute for AI 

3. Greg Nelson, Chief Technology Officer of Opportunity International 

 

Their varied experiences and perspectives led to a rich, thought-provoking discussion that touched on several key themes. 



Key Discussion Themes 


Defining AI: Beyond the Buzzword 

One of the first challenges we face when discussing AI is defining what we mean by the term. As our panelists pointed out, AI isn't a monolithic entity but rather an umbrella term covering thousands of different technologies.  


This complexity underscores the nuances that should be considered when discussing AI's capabilities and implications. For instance, AI can be categorized into narrow AI, which is designed to perform a specific task (like voice recognition or image classification), and general AI, which aims to understand and reason across a wide range of contexts, though we are still far from achieving this level of sophistication. Moreover, the rapid progress in AI research and development has led to a proliferation of techniques, including machine learning, natural language processing, and neural networks, each with its own set of ethical considerations and operational challenges. 


  • The AI Landscape: According to a 2021 Stanford University report, AI publications have grown by 270% in the last five years, indicating the rapid expansion and diversification of the field and the proliferation of new technologies, as outlined above. 


  • Extractive vs. Generative AI  


    • Extractive AI focuses on analyzing and deriving insights from existing data, which generally carries lower risk. Examples include sentiment analysis tools and recommendation systems. Greg Nelson cited an example where Opportunity International is working on an AI-driven agronomy tool, called UlangiziAI, for smallholder farmers in Malawi. Rather than pull from broadly available online information, the model was built using specific data from the Ministry of Agriculture in Malawi, making the information more relevant for farmers in that country. “This way, we know that farmers are getting the best and most relevant data for their own circumstances,” he said. If you’d like more information on this tool, you can read recent articles on Devex and Bloomberg.


    • Generative AI, on the other hand, creates new content based on learned patterns. It can be used as a creative prompt but shouldn’t be used as a definitive source of the truth. Generative AI includes technologies like GPT (Generative Pre-trained Transformer) models, which can generate human-like text, and GANs (Generative Adversarial Networks) used in creating realistic images. These tools, while impressive, may lack the domain-specific depth required for AI applications in impact and sustainability work.

 

  • Risk Assessment: The level of risk associated with AI applications varies greatly. For instance, an AI system used for movie recommendations carries far less risk than one used in healthcare diagnostics or criminal justice decision-making. 


  • AI as a Tool: Our panelists emphasized that generative AI should be viewed as a creative prompt rather than a source of factual information. A 2022 study by MIT researchers found that even state-of-the-art language models can generate factually incorrect information in up to 30% of cases, highlighting the importance of human oversight and verification. 
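The extractive vs. generative distinction above can be made concrete with a toy sketch. This is purely illustrative, with hand-built word lists and a bigram model standing in for the trained systems real applications use: the extractive function scores sentiment already present in existing text, while the generative function produces new text from learned patterns (which is exactly why its output shouldn't be treated as a source of truth).

```python
import random

# Extractive: derive a signal from existing data, here via a toy
# sentiment lexicon (illustrative only; real systems use trained models).
POSITIVE = {"good", "great", "relevant", "helpful"}
NEGATIVE = {"bad", "poor", "irrelevant", "harmful"}

def sentiment(text: str) -> int:
    """Return positive-word count minus negative-word count."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Generative: produce new content from learned patterns, here a toy
# bigram model that chains words by what followed them in the corpus.
def train_bigrams(corpus: str) -> dict:
    """Map each word to the list of words observed after it."""
    words = corpus.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model: dict, start: str, length: int = 8) -> str:
    """Emit up to `length` words after `start` by sampling successors."""
    out = [start]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)
```

The generator happily produces fluent-looking sequences that appear nowhere in its training data, a miniature version of why panelists stressed treating generative output as a prompt rather than a fact.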



Navigating the Policy Gap 

A significant concern in the AI landscape is the lag between technological development and policy creation.  


  • Policy Development Timeline: Historical precedents suggest that comprehensive policy often lags technological innovation by several years. For example, it took nearly a decade after the widespread adoption of social media for the EU's General Data Protection Regulation (GDPR) to come into effect in 2018. 


  • Legal Liability Challenges: The lack of a comprehensive legal liability rubric for AI poses significant challenges. In the U.S., existing laws like the Communications Decency Act (Section 230) provide some protections for online platforms, but they weren't designed with AI in mind.  


  • Cultural Adaptation: As Jennifer Dumas pointed out, "We released a mature technology without the culture having caught up to that." This echoes concerns raised by scholars like Shoshana Zuboff in her book "The Age of Surveillance Capitalism," which argues that our social and economic systems are struggling to adapt to the rapid pace of technological change. 


  • Ethical Frameworks: The discussion brought to mind Isaac Asimov's Three Laws of Robotics, highlighting the need for ethical frameworks in AI development. While these laws were fictional, they've inspired real-world efforts like the IEEE's Ethically Aligned Design guidelines and the EU's Ethics Guidelines for Trustworthy AI. 



Ensuring Informed Consent in Diverse Contexts 

The concept of informed consent becomes increasingly complex in the context of AI, especially for global applications and users from diverse backgrounds, some of whom may not even be familiar with major technology platforms like Google. 

 

For instance, in many developing countries, the lack of digital literacy can lead to users unknowingly consenting to data practices that exploit their information. Additionally, the concept of informed consent is not uniform across cultures, which complicates the ethical deployment of AI systems globally. Engaging local communities in the design and implementation of AI systems is crucial to ensuring that their voices and needs are prioritized. 

 

  • Digital Divide: According to the International Telecommunication Union, as of 2023, approximately 2.7 billion people worldwide still lack internet access. This digital divide raises questions about how to ensure informed consent in regions with limited exposure to technology. One way to overcome this, according to our panelists, is to use existing technologies, such as WhatsApp, as the front end for AI-generated tools on the backend. 


  • AI in Emerging Markets: There's a risk of perpetuating digital colonialism through AI implementation in emerging markets if practitioners don’t involve local communities in decision making. A 2021 report by Mozilla highlighted how AI systems trained primarily on data from Western countries often perform poorly when applied in different cultural contexts. Greg Nelson reinforced this notion by talking about the importance of using locally available datasets and local languages to train models. 

    Getting information on crop health using the UlangiziAI app in Malawi.


  • Stakeholder Identification: Our panelists emphasized the importance of considering all stakeholders affected by an AI system, beyond just the immediate users. This aligns with the concept of "stakeholder theory" in business ethics, which argues that companies should create value for all stakeholders, not just shareholders. 


Building Trust in AI 

Trust is fundamental to the widespread adoption and ethical use of AI, yet a lack of it remains a significant barrier to broader adoption. 


  • Current Trust Levels: A 2022 global survey by Edelman found that only 37% of respondents trust AI companies to "do what is right." This underscores the point made by Ruth Kikin-Gil that "the technology hasn't earned the trust yet." 


  • Misinformation Risks: The potential for AI to generate and spread misinformation is a significant concern. A 2020 study published in Nature Machine Intelligence found that AI-generated text was rated as more credible than human-written text in certain contexts, highlighting the need for robust detection and verification systems. 


  • AI in Critical Decisions: As our panelists noted, when people's lives depend on AI, such as in healthcare or criminal justice, the margin for error must be extremely low. A 2016 ProPublica investigation into COMPAS, an AI system used in criminal risk assessment, found significant racial biases in its predictions, underscoring the importance of rigorous testing and oversight. 


  • Inclusive AI Development: Building trust with underrepresented groups who have historically been marginalized by technology is crucial. Initiatives like the AI for Good Foundation are working to ensure AI benefits all of humanity, not just a select few. 


AI in the Broader Context of Technology 

Finally, our discussion touched on how AI fits into the broader landscape of technological advancement: 

 

  • Over-reliance on Technology: The tendency to over-rely on new technologies, as exemplified by early GPS adoption, is a well-documented phenomenon in technology adoption studies. A 2022 study in the Journal of Experimental Psychology found that people tend to defer to AI recommendations even when they conflict with their own judgment. This means that developers, policymakers, and users must fully understand the limitations of AI and remain critical thinkers when using it. 


  • Amara's Law: Named after Roy Amara, this principle suggests we tend to overestimate technology's short-term effects while underestimating its long-term impact. This is evident in the history of AI itself: the field has experienced several "AI winters" in which hype outpaced actual capabilities, followed by periods of significant but less publicized progress. 



Join the Conversation 


This event was part of an ongoing series aimed at professionals working at the intersection of human-centered design and social impact. Our next event, focusing on food waste, is scheduled for January 2025. 

 

To stay informed about future events, follow the Global Impact Collective on LinkedIn. If you're interested in learning more about our work or discussing potential collaborations, visit our website or reach out to us at info@globalimpactcollective.net.

 

As AI continues to shape our world, it's crucial that we engage in these discussions and work together to ensure that this powerful technology is harnessed for the greater good. We invite you to be part of this important conversation. 




Five Lessons for Social Sector Organizations That Want to Partner with the Private Sector

If you work at a social-sector organization, it can be intimidating and sometimes difficult to work with private sector companies. One key to successful partnerships is understanding how companies work. My last article looked at international development organizations: how they work, and why you might (or might not) want to partner with them to solve business challenges. 


Now, I want to turn the tables and look at partnership from the corporate perspective. If you work at a social impact organization (donors, multilaterals, NGOs, etc.) what should you keep in mind when designing partnerships or programs with corporate partners? 


Quick story. When I led a teacher training initiative at Microsoft, I often spoke at conferences attended by social sector representatives. I could practically see people’s eyes turning into dollar signs when they heard where I worked. The stark reality was that our entire budget for a worldwide program was tens of millions of dollars, which left little room for transactional donations to organizations. The vast majority of the $250M program went to the field to help train teachers and school leaders. To drive global initiatives, we had to rely on strategic engagements where we could co-design a partnership that would be mutually beneficial.  


Several years ago, I met several representatives from a large, UK-based educational NGO. In initial discussions we discovered that our organizations broadly agreed that the best way for students to learn 21st century skills (creativity, collaboration, communications, etc.) was to train teachers on using technology to support innovative teaching and learning practices. We arranged for a second meeting at Microsoft’s HQ near Seattle to hash out a partnership.  


The London-based team arrived, ready to ask us for millions of dollars to sponsor several of their programs (I learned this later). I started the meeting by saying we wouldn’t talk about money. We’d instead focus on the challenges each organization faced, quantifying the assets we could bring into a partnership, and only then determine if a partnership made sense.  


I could tell they were shocked. They told me later they’d expected to walk away with a check. This, of course, led to the inevitable “valley of despair” that seems to be a part of every partnership negotiation. A day later, we were joined by one of their colleagues from Kenya. He immediately helped us understand the perspective from the field and build a program that would address education reform from a policy perspective. We ended up building a successful multi-year partnership to reach thousands of education policymakers. So, the discussion became more about what we could do, rather than how we would fund it. And the funding indeed followed the program. 


In the years since that story, I’ve worked with dozens of multinational companies to design successful partnerships. Here are five key lessons I’ve learned about working with companies: 


Lesson 1: Think Transformation, Not Transaction 

The first lesson is to design partnerships that are transformational, rather than transactional. Don’t assume that your corporate partners necessarily have budget to write a check to support your organization or your existing programs. Yes, there are corporate foundations that have money and will give grants to non-profit organizations, and there is value in this. But if you want to develop a truly transformational partnership, you should get smart about how your organization can solve the business issues a company may be facing.  


Can you help them reach new customers? Do you have experience and networks of farmers that can help drive sustainability in a supply chain? Do you work on labor or land rights issues that might be relevant? Do you have a unique way of reaching the last mile of consumers? Then, think about a company’s non-financial assets and how they can be leveraged to improve the work you do, whether it’s their technical expertise, reach, channels, or scale. In other words, try to think more like a business.  


Lesson 2: Companies Are Not Monolithic 

I’ve heard many people at non-profit organizations express skepticism about the ambitions and goals of prospective corporate partners. It’s true that many companies have aspects of their business or past issues that are less than ideal (labor issues, unhealthy products, murky supply chains, etc.), but that should not necessarily negate the possibilities to partner with that company. Think about a company that makes products that might be unhealthy if consumed in large quantities. At the same time, that company may be working to create more sustainable supply chains, improving labor practices in its factories, and working to guarantee better opportunities for women. In other words, both things can be true: a company can have issues and can also be doing the right thing.   


Of course, you should determine where your organization wants to draw the line on which companies to work with. Decide what aspects of a company’s business can extend your mission and which would be detrimental to it. For example, in a previous role, we were approached by a foundation that was funded by the tobacco industry. We were crystal clear that we had no desire to push the agenda or grow the revenue of tobacco companies. However, as we got to know the organization better, we recognized that there were in fact areas of alignment. The team that approached us was working with smallholder tobacco farmers in some of the least-developed countries on earth. These farmers’ livelihoods would be decimated by decreasing demand for tobacco worldwide, and they needed to explore alternative commodities and markets. After much consideration, analysis, and internal debate, we decided to work with the organization because working to improve the livelihoods of smallholder farmers fit squarely in our mission.  


Lesson 3: Understand the Corporate Structure  

It’s important to understand the corporate structure of a prospective partner, and the motivations and incentives of groups within the company. In its most basic form, a company sources raw materials from factories, farms, or mines (or develops intellectual property); develops products to meet a customer need; and sells those products to customers (consumers or businesses). The goal is revenue growth and (usually) shareholder value. Within this basic structure there’s a ton of nuance, and the dynamics in any big organization are inevitably complex and unique. 

For example, you might work with a corporate sustainability team that is trying to improve a broad range of practices across a wide variety of supply chains and issue areas. They may be responsible for reducing greenhouse gas emissions, improving labor practices, preventing deforestation, or a host of other issues. To achieve their goals, people on the team will need to partner with procurement, legal and government affairs, communications, marketing/brand, product design/packaging, manufacturing and operations, and treasury teams. They need to manage projects, partnerships, and programs with field teams that might have different motivations (e.g., getting the best price for goods or meeting quarterly sales targets) and incentive structures. Any corporate manager will tell you that a large part of their job is managing expectations and driving influence across a matrixed organization. 

Before moving forward with any partnership discussions, spend time mapping the internal groups that your counterpart may be working with, and how they may contribute to (or block) a partnership, program, or product development. This can be done through external research or by simply asking your contacts who they work with and what the opportunities or challenges might be. These internal stakeholders should be part of any co-creation process. 


Lesson 4: Recognize HQ vs. Field Dynamics 

The opportunity to partner with companies can come from many different areas or geographies, and each company operates differently when it comes to headquarters vs. the field. In some cases, field teams are fully empowered and have the budget to develop and execute partnerships. In other cases, field teams may be completely dependent on corporate support and may in fact have their objectives and strategies dictated to them by HQ. These structures shape the power dynamics at play and can influence the success of your partnership. 


In any case, if you are developing a partnership that will operate at the field level, you and your corporate partner will likely need to get buy-in and commitment from field teams. These teams will often be responsible for sourcing or for selling, depending on what part of the business you are working with, and their goals may not be well aligned with the longer-term objectives of a strategic partnership. This is again when it becomes super important to understand the business objectives of your partner. Think about what’s in it for the field reps of your partner: For example, how would improving irrigation practices at the farm level increase yield and quality, two things that a field agronomist might care about? Will a program focused on small and medium enterprises in villages help a field salesperson achieve their targets for the year?  


Lesson 5: Recognize Your Value to the Relationship  

It’s important to recognize – and help your partners recognize – that you bring a lot to the table in any partnership, even if your organization is smaller or less well known. Make sure that you clearly articulate the value of your organizational assets, your mission, and your ability to execute. 

Recognizing your value also means that you don’t need to acquiesce to urgent timelines around an announcement deadline, event, or the need to sign an MOU. Some companies move quickly with an attitude of “getting it done and worrying about the details later.” I’ve seen too many partnerships fail because something that looked good on paper had no real meat on the bones. While there might be very good reasons to work toward a deadline, it’s imperative that you and your partner take the time to think through partnership governance, roles and responsibilities, and success metrics. Don’t be afraid to push on this, even if you are working with a well-known international brand; it will pay off in the long term. 


Of course, every company is different, and every partnership has unique dynamics. Building a successful partnership is first and foremost dependent on establishing strong relationships with the people you’ll work with. Once you establish trust, define common objectives and build a mutual understanding of what you want to achieve together, I hope you can use the lessons above to build great partnerships. 
