Update 27 July 2019: The German/global initiative FairTube has something in common with us: we both want to enable youtubers and viewers to analyze algorithms collaboratively. YouTube might or might not comply with requests, but only by analyzing how the platform controls videos can we hold it accountable.
YouTube's algorithm is, according to Google itself, the platform's regulation system. It is used to increase profits in two ways, shaping both the flow of information and the type of content that gets boosted: a social, informational, and therefore political influence. This page does not discuss the social impact of YouTube's algorithm, but rather Google's ultimate asset: its network of content creators.
YT wants to control how the platform is perceived by advertisers, who are not willing to pay to appear next to problematic content, which must therefore be penalized.
The influence of advertisers is similar to that of a traditional content publisher, but how much they count, and who matters most to Google, are unknown factors still to be decoded. The very definition of “problematic” content also raises several issues. The concept itself, it goes without saying, is highly relative and varies with the perspective of whoever makes the classification. It is not even clear which criteria are adopted to reach such a definition, nor which actors participate in the process. Human moderators? Ever-changing policies? Artificial intelligence? Or the brands that invest the most? How do the mainstream media affect this process, and which newspapers and TV channels carry the most weight here? And why?
In the digital economy, or at least in the dominant business model of the ICT market, users' attention is the scarce good par excellence. Content creators and the platforms that host them are constantly struggling to get hold of a slice of this asset. In the case of YouTube, circulating information is not chosen, favored, or promoted on the basis of its quality, accuracy, or beauty. The algorithm prefers the most visceral content, which ensures viral sharing. The actual indicators that promote a piece of content to trending are watch time, the number of interactions generated, and the speed of dissemination.
For example, no longer associating high-value ads with potentially viral conspiracy videos means cutting out a big slice of profit.
This section should be written by a group of youtubers, not by me. Since these are technological rules that impact society (also called techno-politics), I compiled a list that should serve as inspiration, not as a frozen set of demands.
Content recognition algorithms might still be used by the company, as long as the metadata assigned to the content is accessible to both the producer and the consumer, and can be updated by the producer.
It should not be an imposition on individuals' autonomy. Anna will be responsible for her statements; only if she deceived users with false metadata could she be sanctioned.
Thanks to the aforementioned metadata, notification criteria can be much more structured than a simple “follow this channel”, because a single channel can clearly produce different kinds of content. For example:
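A minimal sketch of what such metadata-driven notifications could look like, assuming hypothetical producer-declared tags and viewer-chosen topics (none of these names come from an existing API, they are purely illustrative):

```python
# Hypothetical sketch: notifications filtered by producer-declared metadata,
# instead of a blanket "follow the channel" subscription.

def should_notify(subscription, video):
    """Notify only when the producer's self-declared tags overlap the
    topics the viewer opted into for that channel."""
    if video["channel"] != subscription["channel"]:
        return False
    wanted = subscription["topics"]     # topics the viewer opted into
    declared = set(video["tags"])       # tags declared by the producer
    return bool(wanted & declared)      # set intersection: any match?

# Example: follow Anna's channel, but only for her "tutorial" videos.
sub = {"channel": "anna", "topics": {"tutorial"}}
video_a = {"channel": "anna", "tags": ["tutorial", "python"]}
video_b = {"channel": "anna", "tags": ["vlog"]}

print(should_notify(sub, video_a))  # True
print(should_notify(sub, video_b))  # False
```

The point of the sketch is that the filtering logic runs on the consumer's side, over metadata the producer controls, with no profiling of the viewer involved.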
If content producers can honestly describe what they have created, content filtering and selection will be in the hands of the consumers.
(Also: we don't need algorithms that learn from us and from social phenomena. They are just an excuse to legitimize individual profiling.)
We must dismantle the justification for user profiling because, by its very nature, it leads to a segmentation of perception that damages our connected society.
You can also read this first write-up as a .pdf, and please don't forget to:
Youtubers are a category of people who produce culture and who feel the effects of algorithmic oppression. They are perhaps the only category that perceives these effects in a tangible way and knows how to explain them through their consequences (less monetization). Society is gradually being awakened by embarrassing scandals and terrifying revelations about how technology affects public debate. Recent developments are described, for example, in the TED talk by Carole Cadwalladr (who mainly deals with Facebook), and in an analysis of YouTube's CEO and the company's big internal problems. Facebook and Google keep repeating the promise to clean their platforms of problematic content using algorithms. To anyone with even minimal awareness of technology and of the social complexity of this world, that promise sounds deceptive and unsustainable. So why do they make it? I believe that reassuring institutional users and investors is the only way for the two companies to maintain a monopoly on their respective critical masses which, together with the data collected and the ability to analyze it, represent their value.
If third parties began to influence company policies, they would gain access to company resources, reducing corporate freedom. Algorithms continue to fail, and again and again youtubers realize they are in the hands of these unscrutinizable mechanisms: for example, when Pokémon videos were banned over the abbreviation “CP”, which stands for Combat Points but was confused with the initials of Child Pornography.
The insane promise by data-analysis companies that the problem of hate speech can be solved by artificial intelligence flattens a more nuanced and complex reality. Human rights are in competition: freedom of expression ends up conflicting with privacy and the right to be forgotten. That is why there must be, in addition to clear rules, a public understanding of these phenomena, of the reasons behind them, and of the different contexts in which the same behavior can be both tolerable and deplorable. These dynamics could trivially be reduced to simple market discourse: “private companies decide what happens on their platforms” or “if the market does not want your content, find other markets or forget about selling it”. This, added to the monopoly mentioned above, opens the door to the last thorny issue, one that neoliberals oppose.
The value of platforms that create internal networks grows with the square of their nodes. The largest network is thus destined to grow ever faster, obtain critical mass, and become the only cultural reference meeting users' needs. There can be no competition in this situation. The issue was already addressed years ago in the field of telephone networks and resolved with portability and interoperability. We cannot force this transition, but we must obtain sufficient visibility in policy-making in order to become another voice pushing in this direction. A site that has had some success in showing how algorithms impact the quality of debate is algotransparency. The project demonstrated how YT has often associated conspiracy videos with scientific content, reinforcing the idea that virality is preferred over quality. As long as we talk about space research vs. aliens it may have little impact, but it becomes more significant when this dynamic spreads anti-vax theories and hate speech, and kills the debate. Interoperability does not solve the problem of misinformation, but the possibility of creating platforms with different policies, policies that reward quality, fact-checking, and the monetization of products, would make disinformation harder. On today's platforms, visualization metrics, by their very nature, end up promoting content that is easy to share, without too much thought, without reflection or verification.
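The “square of their nodes” claim above is Metcalfe's law; a rough back-of-the-envelope version shows why it pushes toward a single winner:

```latex
% Metcalfe's law: a network of n nodes has roughly one possible link
% per pair of nodes, so its value V grows quadratically:
V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^2}{2}
% Two separate networks of n users are worth about
% 2 \cdot \frac{n^2}{2} = n^2, while one merged network of 2n users
% is worth about \frac{(2n)^2}{2} = 2n^2: twice as much.
```

Under this (admittedly simplified) model, consolidation always creates more value than coexistence, which is why the largest platform tends to absorb the rest unless portability and interoperability are imposed from outside.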
It could also be that YouTube changes its policies and adopts one that rewards and monetizes only videos made with scientific or journalistic rigor. What we would then see would be a change in the platform's policies with an impact on the monetization logic. Even in the remote possibility that YouTube changes so drastically for the collective good, youtube.tracking.exposed would still be useful: when the platform and the algorithm change with the aim of changing what content is shared, a third party who can independently evaluate those changes is necessary. Interoperability, if applied uncritically and apolitically, could even worsen the situation: it could enable more repressive providers, which monitor and filter more, trade in data, or develop algorithms worse than Google's. For this reason, aspiring to interoperability is not enough: we need education of communities, and a capacity for analysis and criticism of techno-political dynamics. Perhaps we are still far from real interoperability between platforms that would break this monopoly, but a third-party system of analysis is undoubtedly fundamental to clarify these abuses of dominant position, and perhaps to speed up the separation between network, applications, and policies.
One force among others is sparking debate and legislative action, and has shown that platforms are not neutral and cannot manage this complexity as long as profit remains their only objective: misinformation for propaganda purposes, the ethics of dark patterns, and the creation of false, artificial, micro-targeted content during election campaigns. This regulatory movement (still far from materializing) started after the well-known Brexit / Trump / alt-right polarization sequence, and continued with the American Congress and the British Parliament holding hearings to understand the platforms' role.
In this debate and in the allocation of responsibilities, in my opinion, platforms have won once again, managing to escape scrutiny of their internal dynamics while merely acknowledging that they can be abused. The promises of transparency and of “doing better” are never aimed at tackling their market domination and arbitrariness of action, but rather at making it more difficult, and more transparent, to advertise on politically sensitive issues (such as candidates, topics of political debate, or contexts in which discrimination penalizes categories recognized as protected).
This narrative of “doing better” was one of the levers that made platforms feel more responsible for the open sewer that is the alt-right. It originated on American social networks which, because of the absolute protection of freedom of expression, have for years legitimized every kind of hate speech, coordinated or spontaneous.
Following the Christchurch massacre in New Zealand, and the alt-right social media references that the terrorist repeated in his infamous video, platforms have begun to treat the cultural references of these networks as problems to be banned. This, like electoral transparency, must not be seen as a victory for civil society. These are crumbs, partial and arbitrary concessions that neither affect the business model nor limit the power these platforms hold. The analogy is that of a dictator who, for once, did something that seemed reasonable to us, a legitimate use of power. Let us not be deceived: it is us, the mass of users, who gave platforms their power, finding it convenient to use an interface and then ending up tangled in their networks. The value, for us, is the content, the diversity of the information package, and there is no justification for a corporation to exercise power over how this information is shared. It is even less legitimate that we are scrutinized in the process. But to question this otherwise unstoppable trend, we must first understand it, judge it, and imagine alternatives.