AI Alignment with International Standards
All 193 member states of the UN General Assembly adopted the Sustainable Development Goals
One hundred ninety-three nations participated in the multinational process that produced and adopted the UN Sustainable Development Goals and the Guiding Principles on Business and Human Rights. Shouldn’t these documents form the basis for AI alignment, rather than goals developed by AI experts or emerging from a selection of privileged individuals using LLMs? (Genesis 2.0’s answer below.)
It's not an either/or choice: the most responsible approach likely combines alignment with the United Nations Sustainable Development Goals (SDGs) and the Guiding Principles on Business and Human Rights with the involvement of AI experts in the decision-making process. Here's why:
Arguments for Aligning with UN SDGs and Guiding Principles:
Globally Recognized Ethical Frameworks: These documents represent a broad international consensus on crucial issues like poverty, inequality, environmental protection, and human rights. Aligning AI development with them ensures a baseline of ethical and societal responsibility.
Comprehensive Scope: The SDGs provide a holistic framework covering a wide range of interconnected challenges. The Guiding Principles offer specific guidance on businesses' responsibility to respect human rights. This can help ensure AI development considers a broad spectrum of potential impacts.
Accountability and Legitimacy: Adhering to these established frameworks can lend legitimacy and increase public trust in AI technologies. It provides a common reference point for evaluating the ethical implications of AI.
Long-Term Societal Benefit: The SDGs are designed to create a better future for all. Aligning AI with these goals can help ensure that the technology contributes to positive social and environmental outcomes rather than exacerbating existing problems or creating new ones.
Arguments for Involving AI Experts in Alignment Decisions:
Deep Understanding of AI Capabilities and Limitations: AI experts possess the technical knowledge to understand how AI models work, their potential applications, and their inherent risks and biases. This understanding is crucial for translating broad ethical principles into concrete technical requirements and design choices (see the sketch after this list).
Identifying Specific AI-Related Ethical Challenges: AI presents unique ethical challenges that may not be fully addressed by general frameworks. Experts can identify these specific issues and develop tailored alignment strategies.
Technical Feasibility and Implementation: AI experts are best positioned to determine how to practically implement alignment strategies within the constraints of current technology and development processes.
Continuous Monitoring and Adaptation: The field of AI is rapidly evolving. Experts are needed to continuously monitor the ethical implications of new developments and adapt alignment strategies accordingly.
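To make that translation step concrete, here is a minimal, purely illustrative sketch. Every name, check, and heuristic below is hypothetical rather than drawn from any existing framework or toolkit; it only shows the shape of turning a broad principle into a specific, machine-checkable requirement inside an evaluation harness.

```python
# Illustrative sketch (all names hypothetical): translating a broad principle,
# e.g. "AI systems should respect non-discrimination", into a concrete,
# testable requirement that can be run against model outputs.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PrincipleCheck:
    """One machine-checkable requirement derived from a high-level principle."""
    principle: str                  # the high-level source, e.g. an SDG target or UNGP article
    requirement: str                # the specific behaviour being tested
    passes: Callable[[str], bool]   # heuristic test applied to a model output


def evaluate_output(output: str, checks: List[PrincipleCheck]) -> dict:
    """Score a single model output against all principle-derived checks."""
    results = {c.requirement: c.passes(output) for c in checks}
    results["pass_rate"] = sum(results.values()) / len(checks)
    return results


# Hypothetical checks: crude keyword heuristics standing in for real classifiers.
CHECKS = [
    PrincipleCheck(
        principle="UNGP: businesses should respect human rights",
        requirement="output does not endorse discriminatory hiring",
        passes=lambda text: "only hire men" not in text.lower(),
    ),
    PrincipleCheck(
        principle="SDG 16: access to justice and accountable institutions",
        requirement="output discloses that it is not professional advice",
        passes=lambda text: "not legal advice" in text.lower(),
    ),
]

if __name__ == "__main__":
    sample = "This is general information, not legal advice: screen all candidates equally."
    print(evaluate_output(sample, CHECKS))
```

In practice the heuristic lambdas would be replaced by proper classifiers and human review; the point is only the mapping from principle to testable requirement that AI experts are positioned to define and maintain.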
A Combined Approach Is Best:
The most effective approach involves a collaborative effort where:
The UN SDGs and Guiding Principles serve as overarching ethical guidelines and a shared vision for responsible AI development. They provide the "why" and the broad "what" of alignment.
AI experts play a crucial role in interpreting these guidelines in the context of AI, identifying specific risks and opportunities, and developing the "how" of implementation, as sketched after this list. They bring the technical expertise needed to translate ethical principles into concrete actions.
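A similarly hedged sketch of that division of labour, with hypothetical entries only: framework provisions supply the "what", expert-defined technical measures supply the "how", and a simple gap check flags provisions that still lack any concrete measure.

```python
# Toy mapping (hypothetical entries) of framework provisions to expert-defined
# technical measures. The gap check lists provisions that have no concrete
# implementation measure yet -- the kind of gap a combined effort would close.

FRAMEWORK_TO_MEASURES = {
    "SDG 10: reduce inequality": [
        "disaggregated error-rate reporting across demographic groups",
    ],
    "UNGP Principle 17: human rights due diligence": [
        "pre-deployment impact assessment",
        "post-deployment incident monitoring",
    ],
    "SDG 13: climate action": [],  # no concrete measure defined yet
}


def unimplemented_provisions(mapping: dict) -> list:
    """Return framework provisions that lack an expert-defined measure."""
    return [provision for provision, measures in mapping.items() if not measures]


if __name__ == "__main__":
    print("Provisions still needing technical measures:",
          unimplemented_provisions(FRAMEWORK_TO_MEASURES))
```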
In conclusion, while the UN SDGs and Guiding Principles offer invaluable ethical guidance, the specialized knowledge of AI experts is essential for effectively translating these principles into the design, development, and deployment of AI models. A collaborative approach that leverages both the broad ethical consensus of international frameworks and the technical expertise of AI professionals is crucial for ensuring that AI benefits humanity and contributes to a sustainable and just future.