Wednesday, December 25, 2024

The ethics of AI

AP / THE CONVERSATION – Artificial intelligence (AI) can be used in countless ways – and the ethical headaches it raises are countless, too.

Consider “adult content creators” – not necessarily the first field that comes to mind. In 2024, there was a surge in AI-generated influencers on Instagram: fake models with faces made by AI, attached to stolen photos and videos of real models’ bodies. Not only did the original content creators not consent to having their images used, but they were not compensated.

Across industries, workers encounter immediate ethical questions every day about whether to use AI. In a trial by the United Kingdom-based law firm Ashurst, three AI systems dramatically sped up document review but missed subtle legal nuances that experienced lawyers would catch.

Similarly, journalists must balance AI’s efficiency for summarising background research with the rigour required by fact-checking standards.

These examples highlight the growing tension between innovation and ethics. What do AI users owe the creators whose work forms the backbone of those technologies? How do we navigate a world where AI challenges the meaning of creativity – and humans’ role in it?

As a dean overseeing university libraries, academic programs and the university press, I witness daily how students, staff and faculty grapple with generative AI. Looking at three different schools of ethics can help us go beyond gut reactions to address core questions about how to use AI tools with honesty and integrity.

RIGHTS AND DUTIES

At its core, deontological ethics asks what fundamental duties people have toward one another – what’s right or wrong, regardless of consequences.

Applied to AI, this approach focuses on basic rights and obligations. Through this lens, we must examine not only what AI enables us to do, but what responsibilities we have toward other people in our professional communities. For instance, AI systems often learn by analysing vast collections of human-created work, which challenges traditional notions of creative rights. A photographer whose work was used to train an AI model might question whether their labour has been appropriated without fair compensation – whether their basic ownership of their own work has been violated.

PHOTO: ENVATO

On the other hand, deontological ethics also emphasises people’s positive duties toward others – responsibilities that certain AI programs can assist in fulfilling.

The nonprofit Tarjimly aims to use an AI-powered platform to connect refugees with volunteer translators. The organisation’s AI tool also gives real-time translation, which the human volunteers can revise for accuracy.

This dual focus on respecting creators’ rights while fulfilling duties to other people illustrates how deontological ethics can guide ethical AI use.

AI’S IMPLICATIONS

Another approach comes from consequentialism, a philosophy that evaluates actions by their outcomes. This perspective shifts focus from individuals’ rights and responsibilities to AI’s broader effects.

Do the potential boons of generative AI justify the economic and cultural impact? Is AI advancing innovation at the expense of creative livelihoods?

This ethical tension of weighing benefits and harms drives current debates – and lawsuits. Organisations such as Getty Images have taken legal action to protect human contributors’ work from unauthorised AI training. Some platforms that use AI to create images, such as DeviantArt and Shutterstock, are offering artists options to opt out or receive compensation, a shift toward recognising creative rights in the AI era.

The implications of adopting AI extend far beyond individual creators’ rights and could fundamentally reshape creative industries. Publishing, entertainment and design sectors face unprecedented automation, which could affect workers along the entire production pipeline, from conceptualisation to distribution. These disruptions have sparked significant resistance. In 2023, for example, labour unions for screenwriters and actors initiated strikes that brought Hollywood productions to a halt.

A consequentialist approach, however, compels us to look beyond immediate economic threats, or individuals’ rights and responsibilities, to examine AI’s broader societal impact. From this wider perspective, consequentialism suggests that concerns about social harms must be balanced with potential societal benefits.

Sophisticated AI tools are already transforming fields such as scientific research, accelerating drug discovery and climate change solutions. In education, AI supports personalised learning for struggling students. Small businesses and entrepreneurs in developing regions can now compete globally by accessing professional-level capabilities once reserved for larger enterprises.

Even artists need to weigh the pros and cons of AI’s impact: It’s not just negative. AI has given rise to new ways to express creativity, such as AI-generated music and visual art.

These technologies enable complex compositions and visuals that might be challenging to produce by hand – making AI an especially valuable collaborator for artists with disabilities.

CHARACTER FOR THE AI ERA

Virtue ethics, the third approach, asks how using AI shapes who users become as professionals and people. Unlike approaches that focus on rules or consequences, this framework centres on character and judgment.

Recent cases illustrate what’s at stake. A lawyer’s reliance on AI-generated legal citations led to court sanctions, highlighting how automation can erode professional diligence. In health care, the discovery of racial bias in medical AI chatbots forced providers to confront how automation might compromise their commitment to equitable care.

These failures reveal a deeper truth: Mastering AI requires cultivating sound judgment.

Lawyers’ professional integrity demands that they verify AI-generated claims. Doctors’ commitment to patient welfare requires questioning AI recommendations that might perpetuate bias. Each decision to use or reject AI tools shapes not just immediate outcomes but professional character.

Individual workers often have limited control over how their workplaces implement AI, so it is all the more important that professional organisations develop clear guidelines. What’s more, individuals need space within their employers’ rules to exercise their own sound judgment and maintain professional integrity. – Leo S Lo
