Harnessing Human-Centered AI for Societal Good

In the rapidly evolving landscape of artificial intelligence, it's crucial to pause and consider how we can harness this powerful technology for the betterment of society.

October 18, 2024

Tactile Inc.

Recently, the Global Impact Collective brought together members of Seattle's design and impact community to explore this topic. Our event, "Harnessing Human-Centered AI for Societal Good," featured an engaging panel discussion with experts from diverse backgrounds, offering valuable insights into the challenges and opportunities presented by AI. 


Our Distinguished Panel 

  1. Ruth Kikin-Gil, Responsible AI Strategist at Microsoft 

  2. Jennifer Dumas, Chief Counsel at Allen Institute for AI 

  3. Greg Nelson, Chief Technology Officer of Opportunity International 

Their varied experiences and perspectives led to a rich, thought-provoking discussion that touched on several key themes. 

Close-up of a person’s hands holding a smartphone and paperwork, seated outside on a blue plastic chair, representing human-centered AI for societal good

Defining AI: Beyond the Buzzword

One of the first challenges we face when discussing AI is defining what we mean by the term. As our panelists pointed out, AI isn't a monolithic entity but rather an umbrella term covering thousands of different technologies.  

This complexity demands nuance when discussing AI's capabilities and implications. For instance, AI can be divided into narrow AI, which is designed to perform a specific task (like voice recognition or image classification), and general AI, which aims to understand and reason across a wide range of contexts, a level of sophistication we are still far from achieving. Moreover, rapid progress in AI research and development has produced a proliferation of techniques, including machine learning, natural language processing, and neural networks, each with its own ethical considerations and operational challenges. 

The AI Landscape

According to a 2021 Stanford University report, AI publications grew by 270% over the preceding five years, reflecting the field's rapid expansion and diversification and the proliferation of new technologies outlined above. 


Extractive AI

Focuses on analyzing and deriving insights from existing data, an approach that generally carries lower risk than generating new content. Examples include sentiment analysis tools and recommendation systems. Greg Nelson cited an example: Opportunity International is building UlangiziAI, an AI-driven agronomy tool for smallholder farmers in Malawi. Rather than pulling from broadly available online information, the model was built using specific data from Malawi's Ministry of Agriculture, making its answers more relevant for farmers in that country. “This way, we know that farmers are getting the best and most relevant data for their own circumstances,” he said. For more on the tool, see recent articles on Devex and Bloomberg. 
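To make "analyzing existing data" concrete, here is a toy sketch of the extractive idea: a lexicon-based sentiment scorer that derives an insight (polarity) from text it is given, without generating anything new. The word lists and function name are illustrative only, not drawn from any panelist's tool.

```python
import string

# Tiny illustrative lexicons; real sentiment tools use far larger vocabularies.
POSITIVE = {"good", "great", "helpful", "relevant"}
NEGATIVE = {"bad", "poor", "irrelevant", "harmful"}

def sentiment_score(text: str) -> int:
    """Return (# positive words) - (# negative words) found in the text."""
    words = [w.strip(string.punctuation) for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("great and helpful advice"))   # 2
print(sentiment_score("poor, irrelevant results"))   # -2
```

The key property is that the output is entirely grounded in the input data, which is why extractive systems are easier to audit than generative ones.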


Generative AI

Creates new content based on learned patterns. It can be a useful creative prompt but should not be treated as a definitive source of truth. Generative AI includes technologies like GPT (Generative Pre-trained Transformer) models, which generate human-like text, and GANs (Generative Adversarial Networks), which create realistic images. Impressive as they are, these general-purpose tools may lack the domain-specific grounding needed for impact and sustainability applications. 

 

Risk Assessment

The level of risk associated with AI applications varies greatly. For instance, an AI system used for movie recommendations carries far less risk than one used in healthcare diagnostics or criminal justice decision-making. 


AI as a Tool

Our panelists emphasized that generative AI should be viewed as a creative prompt rather than a source of factual information. A 2022 study by MIT researchers found that even state-of-the-art language models can generate factually incorrect information in up to 30% of cases, highlighting the importance of human oversight and verification. 

Navigating the Policy Gap

A significant concern in the AI landscape is the lag between technological development and policy creation.  


Policy Development Timeline

Historical precedents suggest that comprehensive policy often lags technological innovation by several years. For example, it took nearly a decade after the widespread adoption of social media for the EU's General Data Protection Regulation (GDPR) to come into effect in 2018. 


Legal Liability Challenges

The lack of a comprehensive legal liability rubric for AI poses significant challenges. In the U.S., existing laws like the Communications Decency Act (Section 230) provide some protections for online platforms, but they weren't designed with AI in mind.  


Cultural Adaptation

As Jennifer Dumas pointed out, "We released a mature technology without the culture having caught up to that." This echoes concerns raised by scholars like Shoshana Zuboff in her book "The Age of Surveillance Capitalism," which argues that our social and economic systems are struggling to adapt to the rapid pace of technological change. 


Ethical Frameworks

The discussion brought to mind Isaac Asimov's Three Laws of Robotics, highlighting the need for ethical frameworks in AI development. While these laws were fictional, they've inspired real-world efforts like the IEEE's Ethically Aligned Design guidelines and the EU's Ethics Guidelines for Trustworthy AI. 

Ensuring Informed Consent in Diverse Contexts

The concept of informed consent becomes increasingly complex in the context of AI, especially for global applications serving users from diverse backgrounds, some of whom may be unfamiliar with even major technology platforms like Google.  

For instance, in many developing countries, the lack of digital literacy can lead to users unknowingly consenting to data practices that exploit their information. Additionally, the concept of informed consent is not uniform across cultures, which complicates the ethical deployment of AI systems globally. Engaging local communities in the design and implementation of AI systems is crucial to ensuring that their voices and needs are prioritized. 

 

Digital Divide

According to the International Telecommunication Union, as of 2023, approximately 2.7 billion people worldwide still lack internet access. This digital divide raises questions about how to ensure informed consent in regions with limited exposure to technology. One way to overcome this, according to our panelists, is to use existing technologies, such as WhatsApp, as the front end for AI-generated tools on the backend. 
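The panelists' "familiar front end, AI backend" pattern can be sketched as a simple message relay. Everything below is hypothetical scaffolding: in a real deployment the inbound message would arrive via a messaging platform's webhook (such as the WhatsApp Business API), and `query_model` would call an actual model service.

```python
def query_model(question: str) -> str:
    """Hypothetical stand-in for a call to an AI backend (e.g. an agronomy model)."""
    return f"[model answer to: {question}]"

def handle_incoming_message(sender: str, text: str) -> dict:
    """Relay a chat message to the AI backend and package the reply for
    delivery through the messaging app the user already knows how to use."""
    answer = query_model(text.strip())
    return {"to": sender, "body": answer}

reply = handle_incoming_message("+265-000-0000", "When should I plant maize?")
print(reply["body"])  # [model answer to: When should I plant maize?]
```

The design point is that users never need to learn a new interface: the AI system hides behind a channel they already trust and use daily.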

AI in Emerging Markets

There's a risk of perpetuating digital colonialism when AI is implemented in emerging markets without involving local communities in decision-making.  

A 2021 report by Mozilla highlighted how AI systems trained primarily on data from Western countries often perform poorly when applied in different cultural contexts. Greg Nelson reinforced this notion by talking about the importance of using locally available datasets and local language to train models.  

Stakeholder Identification

Our panelists emphasized the importance of considering all stakeholders affected by an AI system, beyond just the immediate users. This aligns with the concept of "stakeholder theory" in business ethics, which argues that companies should create value for all stakeholders, not just shareholders.