GEMA AI charter

Basic principles for a constructive and fair interaction of human creativity and generative artificial intelligence.

Whereas

The GEMA AI charter contributes to the increasingly urgent discussion on ethical and legal principles for dealing with generative artificial intelligence (AI);

The profound impact of this technology on the economy, culture and society is already clearly evident;

The basic principles formulated in this charter place the focus on the creative human being;

This is the only way to enable genuine innovation and sustainability under fair conditions and to realise the full potential of generative AI for all parties involved;

Creative human performance and achievements are the basis of every generative AI;

AI – including generative AI – has become a part of our everyday reality, both in private and work-related situations; GEMA is using AI to optimise processes;

GEMA provides its members with comprehensive information on ongoing developments and, through partnerships and cooperation initiatives, acts as a broker for AI-based tools that can facilitate or complement human creative work;

At the same time, GEMA members, that is, more than 95,000 music creators and publishers, increasingly use AI tools to support the creative process;

We therefore also regard artificial intelligence as an opportunity, without, however, ignoring the risks.

Therefore, the GEMA AI charter shall serve as food for thought and provide guidelines for the responsible use of generative AI. It shall help to direct the transformative power of this fascinating technology onto constructive paths.

10 basic principles for the use of AI

#1: Digital humanism – where people take centre stage

The development of generative AI is committed to the well-being of people.

GEMA supports music creation that makes use of the new technological possibilities. As a tool, AI can expand and enhance people’s possibilities and creative skills; it has therefore become an integral part of the creative process. It must not, however, displace or drive out human creativity, especially not by exploiting pre-existing creative work. The technological development and the use of generative AI must be designed in such a way that they go hand in hand with social, economic and cultural progress for as many people as possible.

#2: Protection of intellectual property

Intellectual property rights are protected.

They promote creativity, cultural diversity and innovation. They safeguard creators’ freedom and independence by protecting them from commercial exploitation and from third parties using their works without permission. Copyright protects the creative human being and gives them the sole right to decide on the use of their works within the legal framework. This proven principle must also apply in the context of generative AI.

#3: Fair participation in value creation

All parties involved in the value chain receive a fair share of the revenues.

Generative AI opens up new sources of income. Artistic, creative and publicist content is the basis, or the raw material, for potential business models. A fair remuneration model must start at the point where value is created. It must consequently not be limited to the training of the AI model. Rather, the economic advantages that arise from generating AI content (e.g. income from subscriptions) and that are realised in the market through subsequent exploitation (e.g. as background music or AI-generated music on online music platforms) must also be taken into account. The decisive factor for participation is therefore not where the training takes place but where the generation tools or the generated content are offered. In addition, the competition with works created by people must be taken into consideration; after all, these very works made AI content possible in the first place. This must also apply in cases where synthetic data was used for training the AI. Synthetic data is, in turn, based on works created by people, whose creative power lives on in such content when AI music is generated.

#4: Transparency

AI providers act in a transparent manner.

It must be transparent and clear which specific content, works and other data are used for training and which measures AI providers take to ensure that the development and deployment of their technologies are in harmony with prevailing law. This includes a duty on the part of providers to ascertain which content may be used freely for training and to obtain the respective licences from the collective management organisations. Users of AI applications should also be told whether they are interacting with an AI, whether output was generated by AI and whether AI is used to curate cultural or media content.

#5: Negotiations at eye level

Market power is handled responsibly.

The relatively young market for generative AI is dominated by just a few large digital corporations. They have the computing capacities, financial means and infrastructure needed to establish AI technologies in the market quickly and successfully, even while ignoring copyright. The resulting imbalances and asymmetries in negotiating power disadvantage smaller players and individuals. AI requires an intelligent regulatory framework that can keep pace with it. Apart from clear guidelines and instruments under competition law, this particularly requires strengthening collective negotiations so that the interests of the parties involved can be represented jointly vis-à-vis the digital corporations. The large digital corporations must find their way back to respecting copyright.

#6: Respecting moral rights

Moral rights are respected.

Each person must be able to take swift and effective action against infringements of their moral rights. In the context of generative AI, this particularly concerns the right to one’s own voice, one’s own name and one’s own image – all affected by phenomena such as “deep fakes” – as well as one’s own work. It also concerns informational self-determination in general, i.e. the authority to decide which data about oneself is made available to the public.

#7: Respecting cultural diversity

The diversity of cultures shall be respected.

Generative AI must not lead to cultural forms of expression and social trends being homogenised, reproducing the same thing over and over and going round in self-referential circles. AI models can only be as good and as diverse as the content they are trained on. It must be possible to take equal account of cultural niches and popular content, under fair conditions, and to preserve the plurality of opinion. Europe’s cultural and linguistic diversity must not fall by the wayside when it comes to training large AI models.

#8: No bypassing of EU rules

AI providers must remain within reach of EU legal requirements.

The digital space has no borders and enables global business models. This openness must not be exploited by large digital corporations to bypass democratically established rules or to replace them unilaterally. In the AI sector, it must be ensured that European regulations, reservations of use and remuneration obligations are not simply circumvented by having the training take place outside the EU. Whoever offers AI systems that are rolled out in the EU or that affect people in the EU must comply with EU regulations.

#9: Sustainability

The development and operation of generative AI must be designed in a sustainable manner.

Both must take place under fair and social conditions and be ecologically sound. After all, training and using generative AI require enormous computing capacities, whose electricity consumption can reach that of entire countries. Such high energy consumption requires that providers of AI technologies take action to save energy and increase efficiency. This is another area where transparency on the part of providers is needed, as well as a broad discussion in society on how generative AI can be designed in a more sustainable way: socially, economically and ecologically.

#10: Responsibility

AI providers accept responsibility.

The development and deployment of generative AI must be in accordance with ethical principles and legal obligations, which in particular means obtaining the respective licences. AI providers must be aware of the impact of their technology and accept responsibility for it. They therefore rightly enjoy no liability privilege (“Safe Harbour”). Consequently, they must not shift their responsibility onto users.

Download GEMA's AI charter (pdf)
Music news in your mailbox now

AI section in the GEMA Newsletter

With the GEMA newsletter, you receive the latest developments in AI and music directly in your e-mail inbox once a month. The newsletter also includes exclusive interviews, interesting background information, important tips on funding opportunities, current events and more.