
AI-Generated Content: The Ethics and Realities of Disclosure

By Chuck Gallagher | September 18, 2023

The digital age has seen its fair share of manipulated content. Yet the recent episode in which an AI-generated image of an exploding Pentagon went viral, causing confusion and impacting the financial markets, raises significant concerns. The incident underscores the need for greater transparency in distinguishing genuine content from synthetic.

While manipulated content is nothing new, AI has amplified the ease and realism of content creation. Such advancements offer benefits in areas like art and accessibility. But they can also be misused, leading to misinformation, doubt, and exploitation.

The question arises: would marking AI-generated content as such help mitigate the impact? For instance, had the Pentagon image mentioned above been clearly labeled as AI-produced, its spread and subsequent repercussions might have been contained, or at least limited.

Addressing this, the White House recently announced that top AI companies are considering “watermarking” as a potential way to signal AI-generated content. But implementing watermarks or similar disclosure methods is neither straightforward nor foolproof. 

Understanding and delineating these methods is essential both for practical use and for avoiding misinterpretation.

To elaborate:

  1. What exactly is “watermarking”? While most people envision watermarks as visible signs or logos on content, the term can also refer to technical signals embedded within the content itself, invisible to the naked eye. These “direct” and “indirect” disclosures aim to offer transparency, but the semantics around them need clarity. (A minimal sketch of an embedded signal appears after this list.)
  2. Are all disclosures termed “watermarks”? Many use the term as a blanket reference for all disclosure methods, adding to the confusion. For instance, the White House document refers to “provenance,” a technique relying on cryptographic signatures, yet this, too, gets labeled as watermarking. (A second sketch after this list illustrates the provenance approach.)
  3. Can watermarks be tampered with? Watermarks, both visible and invisible, can be manipulated or removed. The degree of ease depends on the content type, making it a less-than-foolproof solution.
  4. Are watermarks consistent for various content types? Embedding invisible watermarks is more challenging in text compared to audiovisual content. It’s essential to specify which disclosure methods are appropriate for different mediums.
  5. Who has the authority to detect and verify these marks? If anyone can detect a watermark, it becomes easier to misuse or strip; conversely, detection controlled by a few big AI companies can hinder transparency and risk monopolization. The governance around detection needs to be thought through.
  6. Do watermarks ensure privacy? Any tracing system embedded in content might compromise the privacy of its creators. The challenge is to devise a watermarking system that safeguards content integrity and its creator’s privacy.
  7. Do watermarks help in understanding content? Visible watermarks, while seemingly transparent, may not always convey the intended message. Misinterpretation can even worsen the situation, as when a user misconstrued Twitter’s “manipulated media” label as a statement on media bias.
  8. Could marking AI-generated content erode trust in genuine content? There’s a risk that widespread labeling might heighten public skepticism, undermining confidence in authentic content. Determining what to label and when is crucial.
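
To make the invisible-watermark idea in point 1 concrete, here is a minimal sketch that hides a short disclosure tag in the least-significant bits of pixel data. It is an illustration only: the `embed_lsb` and `extract_lsb` helpers and the `AI-GEN` tag are hypothetical, and production watermarking systems use far more robust, model-aware schemes than simple bit-flipping.

```python
def embed_lsb(pixels: bytearray, tag: bytes) -> bytearray:
    """Write each bit of `tag` into the lowest bit of successive pixel bytes."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in tag for i in range(7, -1, -1)]
    if len(bits) > len(out):
        raise ValueError("image too small to hold the tag")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, then set it to the tag bit
    return out


def extract_lsb(pixels: bytearray, n_bytes: int) -> bytes:
    """Read an n_bytes tag back out of the first n_bytes * 8 pixel LSBs."""
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[b * 8:(b + 1) * 8]))
        for b in range(n_bytes)
    )


pixels = bytearray(range(256))   # stand-in for one row of grayscale image data
tag = b"AI-GEN"                  # hypothetical disclosure tag
marked = embed_lsb(pixels, tag)
assert extract_lsb(marked, len(tag)) == tag   # tag survives in the marked copy
```

Note how fragile even this scheme is: flipping a single low-order bit, say by re-compressing the image, destroys the tag. That is precisely the tampering concern raised in point 3.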

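The “provenance” approach from point 2 works differently: rather than hiding a signal inside the content, the creator cryptographically signs it. The sketch below, which assumes the third-party `cryptography` package, signs a SHA-256 digest of the content with an Ed25519 key; real provenance standards such as C2PA bind much richer metadata than a bare hash.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

content = b"...image bytes from a hypothetical generator..."

# The AI provider signs a digest of the content at generation time.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(hashlib.sha256(content).digest())

# A platform or end user later verifies the claim with the public key.
public_key = private_key.public_key()
try:
    public_key.verify(signature, hashlib.sha256(content).digest())
    print("provenance verified: content matches the signed digest")
except InvalidSignature:
    print("verification failed: content altered or signature forged")
```

The trade-off is visible here too: the signature travels as detachable metadata, so it can simply be stripped. Its value lies in verifying claims that are present, not in flagging content that carries no claim at all.
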
While watermarking and other disclosure methods offer a direction toward transparency, they are not without challenges. These challenges shouldn’t deter us but should be seen as catalysts for collaborative work among companies, policymakers, and other stakeholders. Only a combined effort can pave the way for policies that help audiences differentiate the real from the synthetic.

Chuck Gallagher is a business ethics and AI speaker and author. For information on his programs, visit https://chuckgallagher.com.

