

James Bernard

Oct 18, 2024

6 min read

Harnessing Human-Centered AI for Societal Good: Insights from Seattle's Design and Impact Community

The Global Impact Collective's "Harnessing Human-Centered AI for Societal Good" event featured an engaging expert panel discussion.


A Malawian farmer uses the UlangiziAI app to assess crop health. The app uses a WhatsApp front end to communicate with farmers in a format that is familiar to them.



In the rapidly evolving landscape of artificial intelligence, it's crucial to pause and consider how we can harness this powerful technology for the betterment of society.

  

Recently, the Global Impact Collective brought together members of Seattle's design and impact community to explore this topic. Our event, "Harnessing Human-Centered AI for Societal Good," featured an engaging panel discussion with experts from diverse backgrounds, offering valuable insights into the challenges and opportunities presented by AI. 



Our Distinguished Panel 


We were fortunate to host three remarkable experts: 

 

1. Ruth Kikin-Gil, Responsible AI Strategist at Microsoft 

2. Jennifer Dumas, Chief Counsel at Allen Institute for AI 

3. Greg Nelson, Chief Technology Officer of Opportunity International 

 

Their varied experiences and perspectives led to a rich, thought-provoking discussion that touched on several key themes. 



Key Discussion Themes 


Defining AI: Beyond the Buzzword 

One of the first challenges we face when discussing AI is defining what we mean by the term. As our panelists pointed out, AI isn't a monolithic entity but rather an umbrella term covering thousands of different technologies.  


This complexity underscores the nuances that should be considered when discussing AI's capabilities and implications. For instance, AI can be categorized into narrow AI, which is designed to perform a specific task (like voice recognition or image classification), and general AI, which aims to understand and reason across a wide range of contexts, though we are still far from achieving this level of sophistication. Moreover, the rapid progress in AI research and development has led to a proliferation of techniques, including machine learning, natural language processing, and neural networks, each with its own set of ethical considerations and operational challenges. 


  • The AI Landscape: According to a 2021 Stanford University report, AI publications have grown by 270% in the last five years, indicating the rapid expansion and diversification of the field and the proliferation of new technologies, as outlined above. 


  • Extractive vs. Generative AI  


    • Extractive AI focuses on analyzing and deriving insights from existing data, which generally carries lower risk than generating new content. Examples include sentiment analysis tools and recommendation systems. Greg Nelson cited an example where Opportunity International is working on an AI-driven agronomy tool, called UlangiziAI, for smallholder farmers in Malawi. Rather than pulling from broadly available online information, the model was built using specific data from the Ministry of Agriculture in Malawi, making the information more relevant for farmers in that country. “This way, we know that farmers are getting the best and most relevant data for their own circumstances,” he said. If you’d like more information on this tool, you can read recent articles on Devex and Bloomberg.


    • Generative AI, on the other hand, creates new content based on learned patterns. It can be used as a creative prompt but shouldn’t be treated as a definitive source of truth. Generative AI includes technologies like GPT (Generative Pre-trained Transformer) models, which can generate human-like text, and GANs (Generative Adversarial Networks), which are used to create realistic images. These tools, while impressive, may lack the domain depth needed for specific applications in impact and sustainability work. A brief code sketch contrasting the two approaches appears after this list.

 

  • Risk Assessment: The level of risk associated with AI applications varies greatly. For instance, an AI system used for movie recommendations carries far less risk than one used in healthcare diagnostics or criminal justice decision-making. 


  • AI as a Tool: Our panelists emphasized that generative AI should be viewed as a creative prompt rather than a source of factual information. A 2022 study by MIT researchers found that even state-of-the-art language models can generate factually incorrect information in up to 30% of cases, highlighting the importance of human oversight and verification. 
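To make the extractive/generative distinction concrete, here is a minimal sketch in Python. It assumes the open-source Hugging Face transformers library and its default models, and is purely illustrative; it is not the tooling our panelists use.

```python
# Illustrative contrast between extractive and generative AI, using the
# Hugging Face `transformers` library and its default models (an assumption
# made for this sketch, not the panelists' tooling).
from transformers import pipeline

# Extractive: analyze existing text and derive a judgment from it.
classifier = pipeline("sentiment-analysis")
print(classifier("The maize crop looks healthy after the early rains."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]

# Generative: produce new text from learned patterns. Useful as a creative
# prompt, but not a definitive source of truth.
generator = pipeline("text-generation")
result = generator("Advice for smallholder farmers during a dry spell:",
                   max_new_tokens=40)
print(result[0]["generated_text"])
```

The extractive call only labels text that already exists, which is why its risk profile is lower; the generative call invents new text, which is why its output needs human verification before anyone acts on it.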



Navigating the Policy Gap 

A significant concern in the AI landscape is the lag between technological development and policy creation.  


  • Policy Development Timeline: Historical precedents suggest that comprehensive policy often lags technological innovation by several years. For example, it took nearly a decade after the widespread adoption of social media for the EU's General Data Protection Regulation (GDPR) to come into effect in 2018. 


  • Legal Liability Challenges: The lack of a comprehensive legal liability rubric for AI poses significant challenges. In the U.S., existing laws like the Communications Decency Act (Section 230) provide some protections for online platforms, but they weren't designed with AI in mind.  


  • Cultural Adaptation: As Jennifer Dumas pointed out, "We released a mature technology without the culture having caught up to that." This echoes concerns raised by scholars like Shoshana Zuboff in her book "The Age of Surveillance Capitalism," which argues that our social and economic systems are struggling to adapt to the rapid pace of technological change. 


  • Ethical Frameworks: The discussion brought to mind Isaac Asimov's Three Laws of Robotics, highlighting the need for ethical frameworks in AI development. While these laws were fictional, they've inspired real-world efforts like the IEEE's Ethically Aligned Design guidelines and the EU's Ethics Guidelines for Trustworthy AI. 



Ensuring Informed Consent in Diverse Contexts 

The concept of informed consent becomes increasingly complex in the context of AI, especially when considering global applications and users from diverse backgrounds, some of whom may not even be familiar with major technological platforms like Google.

 

For instance, in many developing countries, the lack of digital literacy can lead to users unknowingly consenting to data practices that exploit their information. Additionally, the concept of informed consent is not uniform across cultures, which complicates the ethical deployment of AI systems globally. Engaging local communities in the design and implementation of AI systems is crucial to ensuring that their voices and needs are prioritized. 

 

  • Digital Divide: According to the International Telecommunication Union, as of 2023, approximately 2.7 billion people worldwide still lack internet access. This digital divide raises questions about how to ensure informed consent in regions with limited exposure to technology. One way to overcome this, according to our panelists, is to use existing technologies, such as WhatsApp, as the familiar front end for AI-powered tools on the backend (see the sketch after this list).


  • AI in Emerging Markets: There's a risk of perpetuating digital colonialism through AI implementation in emerging markets if practitioners don’t involve local communities in decision making. A 2021 report by Mozilla highlighted how AI systems trained primarily on data from Western countries often perform poorly when applied in different cultural contexts. Greg Nelson reinforced this point by stressing the importance of training models on locally available datasets and in local languages.


  • Stakeholder Identification: Our panelists emphasized the importance of considering all stakeholders affected by an AI system, beyond just the immediate users. This aligns with the concept of "stakeholder theory" in business ethics, which argues that companies should create value for all stakeholders, not just shareholders. 
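As a concrete illustration of the pattern mentioned under "Digital Divide" above, the sketch below shows how a familiar messaging app can serve as the front end for an AI backend. It is written in Python with Flask; the endpoint path, payload fields, and answer_question helper are hypothetical placeholders, not the UlangiziAI implementation.

```python
# Sketch of the "familiar front end" pattern: a messaging provider (e.g. a
# WhatsApp business integration) POSTs each incoming message to a webhook,
# which forwards the question to an AI backend and returns the reply.
# Payload fields and helper names are illustrative assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)

def answer_question(text: str, language: str) -> str:
    """Placeholder for the AI backend, e.g. a model grounded in locally
    sourced agronomy data and queried in the user's own language."""
    return f"[{language}] advice for: {text}"

@app.post("/webhook")
def incoming_message():
    # Exact payload shape depends on the messaging provider's API.
    payload = request.get_json(force=True)
    text = payload.get("text", "")
    language = payload.get("language", "en")
    reply = answer_question(text, language)
    # In production the reply would be sent back through the provider's
    # send-message API; here it is simply returned.
    return jsonify({"reply": reply})

if __name__ == "__main__":
    app.run(port=8000)
```

The value of this design is that users never have to leave the messaging app they already know; only the webhook needs to understand the AI backend.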


Building Trust in AI 

Trust is fundamental to the ethical use of AI, yet a lack of trust remains a significant barrier to its broader adoption.


  • Current Trust Levels: A 2022 global survey by Edelman found that only 37% of respondents trust AI companies to "do what is right." This underscores the point made by Ruth Kikin-Gil that "the technology hasn't earned the trust yet." 


  • Misinformation Risks: The potential for AI to generate and spread misinformation is a significant concern. A 2020 study published in Nature Machine Intelligence found that AI-generated text was rated as more credible than human-written text in certain contexts, highlighting the need for robust detection and verification systems. 


  • AI in Critical Decisions: As our panelists noted, when people's lives depend on AI, such as in healthcare or criminal justice, the margin for error must be extremely low. A 2016 ProPublica investigation into COMPAS, an AI system used in criminal risk assessment, found significant racial biases in its predictions, underscoring the importance of rigorous testing and oversight. 


  • Inclusive AI Development: Building trust with underrepresented groups who have historically been marginalized by technology is crucial. Initiatives like the AI for Good Foundation are working to ensure AI benefits all of humanity, not just a select few. 


AI in the Broader Context of Technology 

Finally, our discussion touched on how AI fits into the broader landscape of technological advancement: 

 

  • Over-reliance on Technology: The tendency to over-rely on new technologies, as exemplified by early GPS adoption, is a well-documented phenomenon in technology adoption studies. A 2022 study in the Journal of Experimental Psychology found that people tend to defer to AI recommendations even when they conflict with their own judgment. This means that developers, policymakers, and users must fully understand the limitations of AI and remain critical thinkers when using it.


  • Amara's Law: Named after Roy Amara, this principle suggests we tend to overestimate technology's short-term effects while underestimating its long-term impact. This is evident in the history of AI itself: the field has experienced several "AI winters" where hype outpaced actual capabilities, followed by periods of significant but less publicized progress.



Join the Conversation 


This event was part of an ongoing series aimed at professionals working at the intersection of human-centered design and social impact. Our next event, focusing on food waste, is scheduled for January 2025. 

 

To stay informed about future events, follow the Global Impact Collective on LinkedIn. If you're interested in learning more about our work or discussing potential collaborations, visit our website or reach out to us at info@globalimpactcollective.net.

 

As AI continues to shape our world, it's crucial that we engage in these discussions and work together to ensure that this powerful technology is harnessed for the greater good. We invite you to be part of this important conversation. 




Tom Bouchard

Design Swarms: A Revolution in Collaborative Problem-Solving

The Global Impact Collective is proud to be partnering with Authentic Design to bring the Design Swarms® workshop and process to our clients, helping them solve the most pressing issues facing people and the planet.


The concept of Design Swarms was conceived by Surya Vanka of Authentic Design in Seattle in 2015 as a response to the growing complexity of global challenges and the need for more inclusive, collaborative problem-solving methods. Surya is now also a Founding Advisor of the Global Impact Collective.


How can Design Swarms help organizations solve tricky issues through creative thinking? Below are three examples.

A Design Swarm in progress

Addressing Gender-Based Violence in Sierra Leone


In 2022, a group of young boys and girls from Sierra Leone won The Frontier Design Prize, one of the most prestigious design prizes in the world, for their innovative solutions to address gender-based violence. The group, comprising twenty-eight 15-year-old students from the Rising Academy Network based in Freetown, employed a sophisticated design thinking approach despite having absolutely no prior experience in design. In just two days in a workshop setting, they created four innovative solutions, comparable to those developed by highly trained, world-class design teams.


One student team focused on the challenges faced by girls with albinism, who endure discrimination rooted in tribal beliefs, including accusations of witchcraft. These accusations expose the girls to constant risk of violence, including ritual attacks, and often drive them into isolation. To combat this, the team designed ‘The Ghost App,’ an innovative social media platform that allows for individual expression without revealing the user's gender, physical features, or skin color.


Addressing Ohio’s Opioid Epidemic


In a workshop at the Ohio State University, a diverse group including forty medical experts, academics, first responders, and students gathered to envision a response to the opioid crisis, at a time when, on average, someone in Ohio died of an opioid overdose every 11 minutes. Using a design thinking approach, they first built empathy for those afflicted by addiction. One team gained insight into the phenomenon of 'accidental addicts': individuals who become addicted after experimenting just once or twice with surplus medication from legitimate prescriptions. They proposed a cheap, simple, and innovative solution called 'Prime Rx': only four pills are delivered daily by Amazon, with authentication required on receipt, thereby eliminating the dangerous surplus. Another team developed a video game that graphically demonstrates to potential addicts the devastating personal consequences of opioid addiction.


Reducing Homelessness Among Women in Seattle


In Seattle, a group of women with no background in design employed a design thinking approach to develop innovative and impactful methods to address the systemic issue of homelessness among women. The scale of this intractable problem was stark: each night, around five hundred homeless women were on the streets of Seattle, and forty-five homeless individuals had died, leading the mayor to declare a homelessness epidemic and a civil emergency. Representatives from Mary’s Place, a women’s shelter, including administrators and formerly homeless women, collaborated with professionals to develop solutions to problems drawn from their own lived experience of homelessness.


A significant insight emerged during the workshop: when a woman becomes homeless, she not only loses her home but often her ability to prove her identity, especially if forced to flee abruptly from domestic abuse without identity documents. This insight led to the creation of ‘Identity Haven,’ a custom, cloud-based solution for storing identity documents with easy access during a crisis.


Design Swarms helped these ordinary people, most of whom did not even know a discipline called design existed, develop innovative solutions using sophisticated, systematic design thinking methods.


Traditional design methods often fall short in addressing multifaceted problems that require a diverse range of perspectives and expertise. In developing Design Swarms, Surya drew on his years as a design leader at Microsoft and a professor of design at the University of Illinois at Urbana-Champaign. He created a novel method that brings together two concepts: Design Thinking and Swarm Creativity. As he developed this method over many years, he was inspired by observing small teams within organizations that moved fast and produced results. These teams didn’t get bogged down in bureaucracy; rather, they displayed the coordinated, collective behavior of swarms observed in nature, such as flocks of birds, ant colonies, or schools of fish, where collective behavior serves a greater good.


What are Design Swarms?


Design Swarms are a unique methodology that combines design thinking principles with the agility and collaborative dynamism of a 'swarm.' This approach focuses on harnessing the collective intelligence and creativity of diverse groups to tackle challenging problems. It involves intense, focused workshops where participants, regardless of their design background, collaborate to brainstorm, ideate, and develop solutions.


The essence of Design Swarms lies in its structured yet flexible approach, laid out in four primary phases.


A Design Swarm begins with the group collaboratively defining the problem through deep empathy, where participants from diverse backgrounds contribute their unique perspectives. This ensures a comprehensive understanding of the issue and sets the stage for the creative process.


In the ideation phase, the swarm mentality shines. Participants brainstorm, leveraging and building upon each other's ideas. This phase highlights the power of collective intelligence, with ideas evolving and maturing through the group's shared creativity.


Next is the prototyping phase, where ideas become tangible concepts. This rapid, iterative prototyping includes constant feedback loops to keep solutions aligned with the problem and user needs.


The testing and refinement phase allows the swarm to evaluate prototypes, soliciting feedback from a broader audience and iteratively refining the solution. This phase is crucial to ensuring the final product or solution is innovative, practical, and user-centric.


At its core, Design Swarms employ dozens of proprietary process maps developed by Surya over ten years. These maps guide participants through a collaborative journey, visually representing the design thinking process used by expert designers. They enable participants unfamiliar with design to systematically explore the problem space before diving into creative solution development.


The Design Swarm Toolkit, along with trained facilitators, helps orchestrate the right ‘swarm behaviors’ across multiple small teams, fostering rapid learning from each other. This structured approach ensures inclusivity of all voices while maintaining focus and efficiency. The methodology is characterized by its emphasis on extreme collaboration and agility, creating an environment where rapid ideation and iterative feedback are the norm.


The Impact of Design Swarms


Design Swarms have had widespread impact across dozens of countries and hundreds of organizations. These swarms have been applied globally in a multitude of contexts, ranging from corporate problem-solving at large, multinational companies to addressing social issues in extremely low-resource communities. The versatility of this method lies in its ability to adapt to different problems and incorporate inputs from a wide range of participants.


In the business world, the method has led to the creation of innovative products and services that are more aligned with user needs and market demands. In the social realm, it has enabled the development of practical, impactful solutions to some of the most pressing challenges by empowering communities to utilize their collective creative potential. In educational settings, Design Swarms have been instrumental in teaching students the value of collaborative problem-solving and design thinking.


Design Swarms represent a paradigm shift in collaborative problem-solving. The unique approach, combining design thinking with swarm creativity, unlocks the potential of collective intelligence – no matter who is in the room. This leads to innovative solutions that are both practical and impactful. It is a powerful tool that leverages the best of human creativity and collective effort to create a more innovative and solution-oriented world.


If you would like to unleash your organization's creative potential for the greater good with a Design Swarm, please get in touch!
