EESC calls for robust values framework addressing AI risks – Euractiv

The European Union has made significant strides in regulating artificial intelligence to ensure it is trustworthy and ethically aligned, yet critics argue the current framework is still insufficient to protect society.

The European Economic and Social Committee (EESC) identifies several key risks posed by artificial intelligence (AI), highlighting the need for a more comprehensive and inclusive approach, prioritising human oversight and involving social partners in AI deployment.

Risks posed by AI

The EESC warns AI advancements could lead to significant job losses and greater inequalities if not properly managed. Automation and algorithmic decision-making may undermine job security, increase work intensity, and diminish workers’ autonomy.

This could erode mental health and working conditions, particularly if AI systems are used for workplace surveillance, monitoring employees continuously with performance metrics that are difficult to contest.

AI systems can also perpetuate discriminatory practices, especially in hiring, promotions, and layoffs, due to biases stemming from flawed training data or algorithms.

This lack of fairness is compounded by the opacity of many AI systems, which makes it hard for individuals to challenge decisions that affect their professional lives.

AI’s massive energy demands also contribute to environmental concerns, while its potential misuse in malicious attacks or criminal activities highlights the need for robust safeguards to protect critical infrastructure.

State of the regulatory framework

The European Union’s AI Act, the first-ever legal framework on AI, categorises AI applications into four risk levels (unacceptable, high, limited, and minimal). It imposes strict requirements on high-risk systems to ensure safety and respect for fundamental rights.

The European AI Office, responsible for enforcing the act, will collaborate with Member States to promote research and ensure that AI technologies meet ethical standards.

While regulations such as the General Data Protection Regulation (GDPR) offer some protection, they fall short of addressing AI’s specific challenges in the workplace. The AI Act contains no provisions to safeguard workers’ rights under algorithmic management, nor does it provide for social dialogue on AI deployment.

Additionally, while the Platform Work Directive addresses some AI-related issues for gig workers, it does not cover the broader workforce, leaving gaps in protection.

The AI Pact, a pre-implementation framework, encourages voluntary compliance with the AI Act’s requirements. The EESC welcomes the European Commission’s plans to incorporate the impacts of digitalisation, including AI, into the Action Plan for the European Pillar of Social Rights (2024-2029).

The European way of using AI

A recent EESC opinion stresses the importance of protecting citizens’ fundamental rights as AI is increasingly adopted in public services.

Transparency in AI decision-making and adherence to the human-in-command principle are crucial, ensuring that AI in public services complements rather than replaces human input.

Public service employers should inform workers about AI monitoring systems to foster trust and understanding of AI’s role in administrative activities.

The EESC supports a human-centric AI model that balances technological advancement with protecting citizens’ rights. This model involves dialogue with civil society stakeholders and calls for comprehensive training and upskilling programmes to meet AI’s demands on the workforce.

Given the sensitive nature of data handled by public services, the EESC stresses the need for robust cybersecurity measures to protect personal information from data breaches and cyberattacks.

Moreover, investments in secure infrastructure and resilient supply chains are necessary to ensure that AI aligns with European values. This includes ongoing stakeholder dialogue to safeguard workers’ rights and workplace practices.

The EESC calls for coordinated European investment in AI development within the bloc, urging authorities to address the risks posed by digital addiction and misuse of social media.

It recommends a comprehensive strategy to combat disinformation, including strengthening fact-checking, combating foreign information manipulation, and fostering cooperation among news media.

Finally, the EESC proposes that journalism be treated as a public good and calls for the reinforcement of the European Digital Media Observatory (EDMO), along with developing a public European news channel to provide factual information across all national languages.

[Edited by Brian Maguire | Euractiv’s Advocacy Lab]




