

Different approaches to AI regulation

10th April, 2024


Disclaimer: Copyright infringement is not intended.

Context

  • Various efforts are underway to formalize AI regulation at the global level; these regulations will be critical to various sectors of governance across countries.

Need for AI regulations

  • Impede SDGs:
    • Unethical and improper use of AI systems would impede the achievement of the 2030 Sustainable Development Goals (SDGs), weakening the ongoing efforts across all three dimensions — social, environmental, and economic.
  • Mitigate Risks:
    • AI can lead to bias, discrimination, privacy violations, and safety hazards. Regulations can help mitigate these risks.
  • Transparency and Explainability:
    • Most AI systems are opaque, making it difficult to understand their decision-making processes. Regulations can promote transparency and explainability.
  • Impact on the workforce:
    • AI could displace a large share of the workforce, and the resulting impact on the economy could be detrimental.
  • Accountability and liability:
    • Determination of responsibility and liability when AI systems cause harm or make erroneous decisions is also a considerable challenge.
  • Public Trust:
    • Clear regulations can build public trust in AI and encourage its responsible development.

Approaches of different countries to regulating AI

European Union

The EU AI Act takes a horizontal approach, which means it applies to all areas in which AI is utilized. It categorizes AI systems into four risk categories, each with related regulatory requirements:

  • Prohibited: Systems that breach fundamental rights or pose unacceptable risks, such as social scoring, which generates biased "risk profiles" of people based on their behavior.
  • High-risk: Systems that have a significant effect on people's lives and rights, such as those used for biometric identification, critical infrastructure, or educational, health, and law enforcement applications. These systems will have to meet strict safety, transparency, and fairness requirements, which could include human oversight, strong security measures, and conformity assessments to demonstrate that they satisfy EU standards.
  • Limited-risk: Systems that interact directly with users, such as chatbots or AI-powered recommendation systems. These systems must disclose that users are interacting with AI and allow users to opt out of the interaction if they wish.
  • Minimal risk: Systems posing little or no risk, such as spam filters or smart appliances with limited AI capabilities.

Chinese approach

Chinese rules seek to balance AI development with legal governance and are based on five key principles:

  1. Generative AI must adhere to the core socialist values of China and must not endanger national security or national interests, or promote discrimination, violence, or misinformation.
  2. Measures should be taken to prevent discrimination on the basis of ethnicity, belief, nationality, region, gender, age, occupation, or health arising from generative AI.
  3. Generative AI must respect intellectual property rights and business ethics to avoid unfair competition and the disclosure of business secrets.
  4. Generative AI must respect the rights of others and not endanger the physical or mental health of others.
  5. Measures must be taken to improve transparency, accuracy, and reliability.

UK’s approach to AI

  • The UK's final AI framework sets out five cross-sectoral principles for existing regulators to interpret and apply within their remits to guide responsible AI design, development, and use.

India’s approach to AI

India is developing a comprehensive Digital India Framework that will include provisions for regulating AI. The framework aims to protect digital citizens and ensure the safe and trusted use of AI.

  • National AI Programme: India has established a National AI Programme to promote the efficient and responsible use of AI.
  • National Data Governance Framework Policy: India has implemented a National Data Governance Framework Policy to govern the collection, storage, and usage of data, including data used in AI systems.
  • Draft Digital India Act: The Ministry of Electronics and Information Technology (MeitY) is framing the draft Digital India Act, which will replace the existing IT Act. The new act will have a specific chapter dedicated to emerging technologies, particularly AI, and how to regulate them to protect users from harm.

Conclusion

  • Given the multitude of regulatory bodies involved, effective governance of the framework will be paramount. Both the Government and the private sector should play a pivotal role in ensuring effective coordination among regulators and mitigating any negative impact of AI. This will be crucial for providing businesses with the regulatory clarity they need to adopt AI and scale their investment in it, thereby bolstering India's competitive edge. India's internet population, already 900 million strong, is expected to touch 1.2 billion soon; AI regulation will therefore be essential to secure these prospects.

Source: https://epaper.thehindu.com/ccidist-ws/th/th_international/issues/78561/OPS/GTFCL7G07.1.png?cropFromPage=true

PRACTICE QUESTION

Q. "What are the different approaches of the countries towards regulating the rapid advancement of AI technologies, and what regulatory measures should be implemented to address these concerns effectively?" Examine. ( 250 Words)