

There is a heated debate going on about Facebook and privacy since the revelations about Cambridge Analytica surfaced. The reaction is a cry for more privacy regulation. But they are wrong.

I feel that there are a lot of misconceptions about the effectiveness of data protection in general. This is not surprising since there are few similar rules in the US and so the debate is based more on projections than on actual experiences. I want to add the perspective of someone who has lived long enough within a strict privacy regime in Germany to know the pitfalls of this approach.

From this angle, I want to reflect on the Cambridge Analytica case and ask how effective EU-style privacy regulation would have been in preventing it. There are three distinct drivers that fuel this loss of control, and they are all closely entangled with the advancement of digital technology. The first: we can no longer control which information is recorded about us, and where. This certainly holds true, and you can watch an instance of this ever-unraveling enlightenment in the outrage about the related issue of how the Facebook Android app has been gathering your cellphone data.

But it is the remaining two drivers of Kontrollverlust that are at the heart of the Facebook scandal.


In the digital world, practically everything we come into contact with is a copy. This huge copying apparatus grows more powerful every year and will keep replicating more and more data everywhere. We can no longer control where our data travels. Dr Alexandr Kogan, the scientist who first gathered the data with his Facebook app, illegally sold it to Cambridge Analytica. The criminal intent with which all parties were acting suggests that they would have done so one way or another. Furthermore, Christopher Wylie — the main whistleblower in this case — revealed that an ever-growing circle of people also got their hands on this data, including himself and even black-market sites on the internet.


The second driver of Kontrollverlust suggests that we already live in a world where copying even huge amounts of data has become so convenient and easy that it is almost impossible to control the flow of information. Regardless of the privacy regulation in place, we should consider our data to be out there, available to anybody with an interest in knowing it. Sure, you may trust big corporations to try to prevent this from happening, since their reputation is on the line and, with the GDPR, there may also be huge fines to pay.

But even if they try very hard, there will always be a hack, a leak, or simply the need for third parties to access the data, and thus the necessity to trust them as well. But much more essential in this case is what I call the third driver of Kontrollverlust. You might assume that mundane data is harmless. That is not true. Thanks to Big Data and machine learning algorithms, even the most mundane data can be turned into useful information. In this way, conclusions can be drawn from data that we never would have guessed it contained.

We can no longer anticipate how our data will be interpreted. There is a debate about how realistic the allegations concerning the methods of Cambridge Analytica are, and how effective this kind of approach really is; I consider myself on the rather sceptical side of it. Summing up, the method works as follows: by letting people take psychological tests via Mechanical Turk and also gaining access to their Facebook profiles, researchers are able to correlate their Facebook likes with the psychological traits measured by the tests.

The result would presumably be a mapping from likes to trait scores. In the next step, you produce advertising content that is psychologically optimized for some or all of the different traits in the model. For instance, they could have created one specific ad for people who are open but not neurotic, another for people who also score high on the extraversion scale, and so on. In the last step, you isolate the likes that correlate with the psychological traits and use them to steer your ad campaign. Facebook gives you the ability to target people by their likes, so you can use its infrastructure to match your psychologically optimized content to the people who are probably most susceptible to it.
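To make the correlation step concrete, here is a minimal sketch in Python, assuming a small, made-up survey table; the column names, values, and the two-trait model are purely illustrative assumptions, not Cambridge Analytica's actual pipeline.

```python
import pandas as pd

# Hypothetical survey data: one row per participant, OCEAN-style trait
# scores from the psychological test plus binary columns for two likes.
df = pd.DataFrame({
    "openness":     [0.9, 0.2, 0.7, 0.4, 0.8],
    "neuroticism":  [0.1, 0.8, 0.3, 0.6, 0.2],
    "likes_page_a": [1, 0, 1, 0, 1],
    "likes_page_b": [0, 1, 0, 1, 0],
})

likes = ["likes_page_a", "likes_page_b"]
traits = ["openness", "neuroticism"]

# Correlate every like with every trait; strong correlations mark the
# likes that are usable as targeting criteria for an ad campaign.
correlations = df.corr().loc[likes, traits]
print(correlations)
```

The strongly correlated likes would then be fed into like-based ad targeting, which is the step Facebook's own infrastructure provides.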

For some compelling arguments against this possibility, read this, this, and this article. You think the GDPR would have prevented such profiling? Since Cambridge Analytica only needs the correlation between likes and traits, it could have completely anonymized the data and still been fine under the GDPR.

They can totally afford to lose every bit of identifiable information within the data and still extract the correlation at hand, without any loss of quality. The GDPR only applies where the identifiable individual is concerned. First, we need to ask ourselves what we learned from the case with respect to data regulation. We learned that likes are a dangerous thing, because they can reveal our psychological structure and, by doing that, also our vulnerabilities.

So, an effective privacy regulation should keep Facebook and other entities from gathering data about the things we like, right? Although there are certainly differences in how strongly different kinds of data correlate with certain statements about a person, we need to acknowledge that likes are nothing special at all. They are more or less arbitrary signals about a person, and there are thousands of other signals you could match against OCEAN or similar profiling models.

You can match login times, the number of tweets per day, browser and screen size, the way someone reacts to people, or all of the above against any profiling model. You could even take a body of text from a person and match the word usage against a model, and chances are that you would get usable results. The third driver of Kontrollverlust basically says that you cannot consider any information about you innocent, because a new statistical model, a new data source to correlate your data with, or a new kind of algorithmic approach can always appear and turn seemingly harmless data into a revelation machine.
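As a toy illustration of this point, the sketch below, with entirely invented numbers and labels, fits an off-the-shelf classifier on exactly such mundane signals; the point is only that nothing about the input data looks sensitive on its own.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical behavioral signals per user: login hour, tweets per day,
# screen width in pixels. None of these look "sensitive" on their own.
X = np.array([
    [23, 40, 1920],
    [ 8,  2,  360],
    [22, 35, 1440],
    [ 9,  4,  375],
])
# Hypothetical labels: did the person score high on extraversion?
y = np.array([1, 0, 1, 0])

# Any off-the-shelf model can try to turn mundane signals into a
# psychological prediction; with enough data, some of them will succeed.
model = LogisticRegression().fit(X, y)
print(model.predict([[21, 30, 1920]]))  # predicted trait for a new user
```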

This is what Cambridge Analytica allegedly did and what will continue to happen in the future, since all these data analysis methods will continue to evolve. This means that there is no such thing as harmless information. Thus, any privacy regulation that takes this danger seriously would have to prevent every seemingly arbitrary bit of information about you from being accessible to anyone. Public information — including public speech — would have to be considered dangerous. And indeed, the GDPR is trying to do just that. This has the potential to turn into a threat to the public, to democracy, and to the freedom of the individual.

When you look back at the origins of German data protection laws, you will find that the people involved were concerned about the freedom of the individual being threatened by the government, which, after all, holds the monopoly on force. Data protection was really a protection of the individual against the government, and as such it has proven to be somewhat effective.

The irony is that data protection was supposed to increase individual freedom. This is also true on the individual level: living in constant fear of how your personal data may fall into someone's hands is the opposite of freedom. I do know people — especially within the data protectionist scene — who promote this view and even live that lifestyle. They spend their time hiding from the public and using the internet in an antiseptic manner.

They are not dissidents, but they choose to live like dissidents. They would happily sacrifice every inch of the public sphere to reach the point of total privacy. But the truth is: this is a very fragile strategy. That, however, needs a different explanation. I do think that there are harmful ways to use profiling and targeting practices to manipulate significant chunks of the population.

We do need regulation to address those. The main difference between disciplinary regimes like, say, the nation state and regimes of control like, say, Facebook is the role of the individual. The state always refers to the individual, mostly as a citizen, who has to play by the rules. As soon as the citizen oversteps them, the state uses force to discipline him back into being a good citizen.

This concept applies all the way down to the state's institutions: the school disciplines the student, the barrack the soldier, the prison the prisoner. The relation is always institution versus individual, and it is always a disciplinary relation. The main objective Facebook is really striving for has nothing to do with the individual. What it cares about is statistics. The goal is to nudge the conversion rate of an advertising campaign up by a fraction of a percentage point. Getting this difference wrong is one of the major misconceptions about our time. We are used to thinking of ourselves as individuals.

Instead of the individual (the un-dividable), it sees the dividual (the dividable): we may think of these characteristics as part of our individual selves, but they are anything but unique. And Facebook cares about them precisely because they are not unique, so it can put us into a target group and address that group instead of us personally. It is not you Facebook cares about, but people like you. In terms of policy, I propose a much more straightforward approach to regulation. We need to identify the dangers and harmful practices of targeted advertising, and we need to find rules that address them specifically.

Every effective policy should take the Kontrollverlust into account, that is, assume the data to be already out there and used in ways beyond our imagination. Instead of trying to capture and lock up that data, we need ways to lessen the harm such data could possibly cause. Deleuze makes a similar point in his text on the societies of control.

Unfortunately, they have done so only to a very limited extent so far. The other problem is a chicken-and-egg problem: the tiresome issue of network effects. But states are different in this respect. What is often forgotten is that the state is itself a huge consumer of software, and whether it uses or refuses a system throws enormous weight onto the scales.

But why stop at Germany? Admittedly, that is far more than the state is currently capable of. The word "blockchain" appears six times in the German coalition agreement for the new government — and always in the context of new and promising digital technologies. Blockchain technology was born with its first popular application, Bitcoin. Bitcoin is based on the fact that all transactions made with the digital currency are recorded in a kind of ledger.

However, this ledger is not located in a central registry, but on the computers of all Bitcoin users. Everyone has an identical copy. And whenever a transaction happens, it is recorded more or less simultaneously in all these copies. It is only when most of the ledgers have written down the transaction that it is considered completed.

Each transaction is cryptographically linked to the preceding transactions so that their validity is verifiable for all. For instance, if someone inserts a fake transaction in between, the calculations are no longer valid and the system raises an alarm. Early on, even bitcoin skeptics admitted that besides the digital currency itself, it is the blockchain technology behind it that holds the real future potential.
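The core mechanism described here, each entry being cryptographically chained to its predecessor, fits in a few lines of Python. This is a deliberately minimal sketch: real blockchains add consensus, signatures, and proof-of-work on top of this hash chain.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transaction: str) -> None:
    # Each new block stores the hash of its predecessor.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transaction": transaction})

def verify(chain: list) -> bool:
    # Recompute every link; a single forged entry breaks the chain.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain: list = []
append_block(chain, "Alice pays Bob 5")
append_block(chain, "Bob pays Carol 2")
print(verify(chain))                          # True
chain[0]["transaction"] = "Alice pays Mallory 500"
print(verify(chain))                          # False: the fake transaction raises an alarm
```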

Since then, many people have been wondering where else it could be applied. The Internet has always been regarded as a particularly decentralized technology, but this no longer holds true for most services today: all of us use one search engine (Google), one social network (Facebook), and one messenger (WhatsApp). And all these services are based on centralized data storage and processing. Blockchain technology seems to offer a way out: all services that previously operated via central databases could now be organized with a distributed ledger.

The ideas go as far as depicting complex business processes within blockchains: for example, automated payouts to an account when a stock reaches a certain value. The hype about blockchain is almost as old as the hype around Bitcoin, so we have been talking about the inevitability of this technology for six or seven years now.
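The automated-payout idea can be pictured as a simple rule that fires once a price threshold is crossed. The following plain-Python sketch only illustrates the logic; on a smart-contract platform the same rule would be executed and verified on-chain, and every name and number here is invented.

```python
# Toy "contract": pay `amount` into `account` as soon as the observed
# stock price reaches `threshold`.
def payout_contract(price_feed, threshold, account, amount):
    for price in price_feed:
        if price >= threshold:
            account["balance"] += amount  # the payout happens automatically
            return price                  # the price that triggered it
    return None

account = {"balance": 0.0}
trigger = payout_contract([98.0, 99.5, 101.2], 100.0, account, 50.0)
print(trigger, account)  # 101.2 {'balance': 50.0}
```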

Hundreds, if not thousands, of start-ups have been founded since then, and just as many applications of the blockchain have been announced. As an interested observer, one asks oneself why there is still no popular application other than cryptocurrencies, which are themselves more or less speculative bubbles without real-world application. Why do all blockchain technologies remain in the project phase, and why does none of them find a market? The answer is that blockchain is more an ideology than a technology.

Ideology means that it is backed by an idea of how society works and how it should work. One of the oldest problems in the social sciences is the question of how to establish trust between strangers. Society can only exist if this problem is adequately resolved. Our answer to it is institutions: think of banks, the legal system, parties, the media, and so on.

All these institutions bundle trust and thus secure social interaction between strangers. However, through their central role these institutions gain a certain social power. This power has always caused a headache to a certain school of thought, the libertarian one: the market — as the sum of all individuals trading with each other — should regulate everything on its own.

Accordingly, they are also very critical of institutions such as central banks that issue currencies. The basic idea behind Bitcoin is to eliminate the central banks — and indeed banks in general — from the equation. Blockchain is the libertarian utopia of a society without institutions. Instead of trusting institutions, we should have confidence in cryptography. Instead of our bank, we should trust in the unbreakability of algorithms.



Instead of trusting Uber or any other taxi app, we should trust a protocol to find us a driver. So when you invest in blockchain, you make a bet against trust in institutions. Technically speaking, a blockchain can do the same things any database has long been able to do — with a tendency toward less. The only feature that distinguishes the blockchain is that no one ever has to trust a central party. But this also generates costs. It takes millions of databases instead of one. Instead of writing down a transaction once, it has to be written down millions of times.

All this costs time, computing power, and resources. So if we do not share the libertarian assumption that people should mistrust institutions, the blockchain is just the most inefficient database in the world. Shortly after the inauguration of Donald Trump, the rumor began to spread that Facebook founder Mark Zuckerberg himself was planning to enter the next presidential race. This rumor is only a symptom of the general lack of understanding of our times.

For Mark Zuckerberg has long been a politician. He has an enormous impact on the daily lives of two billion people. He makes decisions that affect how these people get together, how they interact, even how they see the world. So Zuckerberg is already perhaps the most powerful politician in the world. Any job in traditional politics, including the office of US president, would be a step down. In this text, I will try to determine and analyze the ways in which platforms act politically, examining how they organize their power base and in which fields their policies are already changing the world.

But first we should eliminate three fundamental misconceptions about platform politics. Platforms are not only the objects of politics, but also powerful political subjects. They define the structure of the communication that takes place on them, and this structure is neither arbitrary nor neutral: defining the structure of communication is a political act in and of itself, one that enables certain interactions and reduces the likelihood of other kinds of communication. This is a profound intervention into our social lives, and therefore in itself political.

So it makes sense to think of platforms not merely as companies that provide Internet services, but as political entities, or even institutions. Platforms can be regarded as the Fifth Estate. But unlike the other four estates, platforms are not limited by the boundaries of the nation state; they act and think globally by design. Yet platforms tend to downplay their political power and to refuse responsibility. They are political actors in spite of themselves. One reason why platforms are still not taken seriously as political actors is the general lack of understanding of their power dynamics.

When politicians come up against platforms, they like to throw the weight of their political legitimation around: a primacy derived from the fact that the politician came into office by way of a sovereign, collective decision. But platforms, too, generate a kind of legitimation through collective decision-making, even though it works slightly differently. In his book Network Power, David Singh Grewal argues that the adoption of standards can be understood as a collective decision.

The power of these aggregated, collective decisions is nothing new. It relates to the languages we speak, the manners we cultivate or accept, and, of course, to the network services we choose to use. In the end, we join Facebook not because of its great product quality, but because all our friends are on Facebook.

Once a certain standard is widely established, the pressure on the individual becomes so great that there is little choice but to adopt that standard as well — the alternative often being social ostracism. At least that applies to open standards. Social pressure always comes from the community as a whole, so it can never be instrumentalized by any single actor. Facebook, however, could withhold access to my friends at any time, or place temporal or local restrictions on it.

Politicians are under the misconception that dealing with Google, Facebook, Apple and Co. is much the same as dealing with the corporate power structures they might have encountered at Siemens or Deutsche Bank. And so they resort to the playbook of political regulation to counter these powers. But platform providers are not just large enterprises; their power is based on more than money and global expansion. Rather, the platform is facing down the nation state itself, as a systemic competitor — even if neither side is prepared to admit it yet.

This is why all efforts in conventional politics to regulate platforms must lead to a paradox. Even while politicians are shaking their fists at Google and Facebook, they are granting these platforms more sovereignty by the minute. Any new constraints devised by policymakers just serve to strengthen the political power and legitimacy of the platform. At first sight, this makes perfect sense, because platforms are the logical point of contact for regulating the digital world, thanks to their platform power and deep, data-driven insights.

At the same time, this is fatal, because the state further increases the power of the platforms in this way, making itself dependent on its very competitors. The political influence of platforms takes many forms. I would like to examine more closely three fields in which platforms are already very influential today and will gain even more influence in the future, without claiming that this is an exhaustive list. The first is domestic net politics. Here it is important to regard the network itself as the subject of net politics, which implies that these issues can only be solved from within.

This is not only pertinent to the problems with hate speech, trolling, and fake news we are currently discussing, but also to older issues such as identity theft or doxxing (publishing personal information with malicious intent). Since these problems mostly arise on platforms, it is logical to expect the corresponding countermeasures to come from the platforms themselves. While this does indeed happen occasionally, overall these interventions are still seen as insufficient.

In fact, platforms display a lot of reluctance towards regulations in general. They are hesitant to make use of the political power they already wield, for instance by establishing and enforcing stricter community rules.


After the Nazi march that escalated in Charlottesville, many platform providers were pushed to action and subsequently banned right-wing accounts and websites from their services. Most notably, the Nazi website Daily Stormer was blocked, and even kicked out of the content delivery network Cloudflare. The results so far give little cause for hope. Of course, the standard case is a state attempting to regulate a platform, as we have seen above.

The EU, for example, has several lawsuits pending against Facebook and Google, and the conflicts between the US government and platform providers are becoming increasingly apparent as well. Relations between platforms and states have not always been this bad. In her influential speech on Internet freedom, Hillary Clinton described the platform providers as important partners in spreading democracy and human rights around the globe. Jared Cohen played a particularly pivotal role here.

When a revolution threatened to break out in Iran, Cohen called Twitter and convinced them to postpone scheduled maintenance downtime. When the Arab Spring finally broke out, Cohen was already working at Google, where he helped coordinate various inter-platform projects. Facebook, Twitter, and Google all tried, in their own ways, to support the uprisings in the Arab world, and even cooperated with one another to do so.

One example is the case of the service speak2tweet: Google provided a telephone number that people in Egypt could call to record a message. These messages were then published on Twitter, thus bypassing the Egyptian Internet shutdown. Since the Snowden revelations at the latest, relations between Silicon Valley and Washington have cooled down significantly.

Platforms have since been trying to protect and distance themselves from state interference, mostly through the increasing use of encrypted connections and through elevated technical and legal security. The conflict then escalated over the iPhone that FBI investigators found with the perpetrator of the San Bernardino attack.

The phone was locked and encrypted, and so the investigators ordered Apple to assist with the decryption. Apple refused: in order to unlock the phone, it would have had to introduce a vulnerability into its own security software. In the end, the FBI had to work with a third-party security company to unlock the iPhone. Besides these varied forms of cooperation and conflict between platforms and states, platform-platform relations should also be taken into account, of course.

For instance, while you might get into trouble for posting homophobic content on Facebook, you might get into trouble on VKontakte for posting a rainbow flag. A segregation of society along the boundaries of different platforms and their respective policies seems a plausible scenario, and may well provide a lot more material for foreign net policy in the future. The term cyber-war simply references a new form of war, conducted with digital means. That said, the core misunderstanding here is the assumption that cyber-wars primarily take place between nation states.

Even today, that is hardly the case. Almost without exception, software or a service provided by a specific platform is involved. Further, many attacks are directed at platforms as their primary target. Perhaps the most prominent case is the attack from China on the GitHub developer platform.

GitHub is a popular website where software developers can store and synchronize versions of their code and share them with other users. While it is not unheard of for China to simply shut off services it objects to by activating the Great Firewall, GitHub was a notable exception. With a censorship infrastructure that lets millions of requests per second come to nothing, the Chinese came up with another idea: instead of letting the blocked requests run into the void, they redirected them at a target. GitHub was hit by millions and millions of requests from all over China, pushing the website to its limits.

Finally, platforms are not only the targets of cyber attacks, but more and more frequently the last line of defense for other targets. Analysis of one such attack revealed that it had been carried out mainly by Internet routers and security cameras; it was the largest bot army the world had ever seen. The attacked website eventually found shelter behind Google's Project Shield: a platform operated, incidentally, by Jigsaw, the Google spin-off think tank founded by Jared Cohen. Platforms provide the infrastructure that comes under attack, and they are increasingly becoming targets themselves. Most importantly, the platforms are the only players with sufficient technical capacity and human resources to fend off these kinds of attacks, or to prevent them in the first place.

Platforms already hold a prominent position within the social order, which is itself becoming more and more digital. They regulate critical infrastructure for the whole of society and provide protection and order on the Internet. Increasingly, the platform is in direct competition with the state, which creates dependencies that could turn out to be a threat to nation states.

Whether the state will maintain its independence and sovereignty in the long term will depend on its ability to operate and maintain digital infrastructure on its own. In the long run, the state needs to become a platform provider itself. Platforms, on the other hand, would be well advised to look at the democratic institutions that states have evolved over time in order to address their own domestic net policy issues. It should be noted that competition between the two might even be advantageous for the citizen or user in the long run.

While the state is trying to protect me from the overbearing access of the platforms, platform providers are trying to protect me from the excessive data collection of the state.






The Internet has always been my dream of freedom. By this I mean not only the freedom of communication and information, but also the hope for a new freedom of social relations. Despite all the social mobility of modern society, social relations are still somewhat constricting today. From kindergarten to school, from the club to the workplace, we are constantly fed through organizational forms that categorize, sort and thereby de-individualize us. From grassroots groups to citizenship, the whole of society is organized like a group game, and we are rarely allowed to choose our fellow players.

The Internet seemed to me to be a way out. If every human being can relate directly to every other, as my naive-utopian thinking went, then there would no longer be any need for communalized structures. Individuals could finally interact as peers and organize themselves. Communities would emerge as a result of individual relationships, rather than the other way around. Ideally, there would no longer be any structures at all beyond the customized, self-determined network of relations of the individual.

The election of Donald Trump was only the latest incident to tear me rudely from my hopes. What I mean is the massive support of the official election campaign by an internet-driven grassroots meme campaign. And even though you can argue that the influence of this movement on the election was not as great as the trolls would have you believe, the campaign clearly demonstrated the immense power of digital agitation. It was the complete detachment from facts and reality unfolding within the Alt-Right which, driven by the many lies of Trump himself and his official campaign, has given rise to an uncanny parallel world.

The conspiracy theorists and crackpots have left their online niches to rule the world. In my search for an explanation for this phenomenon, I repeatedly came across the connection between identity and truth. People who believe that Hillary and Bill Clinton had a number of people murdered and that the Democratic Party was running a child sex trafficking ring in the basement of a pizza shop in Washington DC are not simply stupid or uneducated.

They spread this message because it signals membership in their specific group. New social structures with similar tribal dynamics have also evolved in the German-speaking Internet. Their members are closely connected online, communicating constantly with one another while splitting off from the rest of the public, both in terms of ideology and of network. Fake news is not, as is often assumed, the product of sinister manipulators trying to steer public opinion in a certain direction.

Rather, it is food for affirmation-hungry tribes. Demand creates supply, not the other way around. For the study at hand, we analysed hundreds of thousands of tweets over the course of many months, working through one research question after another, scouring heaps of literature, and developing and testing a whole range of theories. On the basis of Twitter data on fake news, we came across the phenomenon of digital tribalism, and took it from there. However, we will not be able to answer all the questions this phenomenon gives rise to, which is why this essay is also a call for further interdisciplinary research.

In early March, there was some commotion on German-language Twitter: the Foreign Office had supposedly issued a terrorism alert for Sweden, and the German media were not reporting on it. Many users agreed that the silence was politically motivated. The bigger picture is that Sweden, like Germany, had accepted a major contingent of Syrian refugees. Ever since, foreign media, and right-wing outlets in particular, have been claiming that the country is in the grip of a civil war. Reports about the terrorism alert being kept under wraps fed right into that belief. As proof, many of these tweets did in fact refer to the section of the German Foreign Office website that includes the travel advisory for Sweden.

The website also notes the date of the most recent update. After some time, the Foreign Office addressed the rumors with a clarification of the facts on its website. Several media outlets picked up on the story and the ensuing corrections. But the damage was done: the fake story had already reached thousands of people, who came away feeling that their views had been corroborated. What happened in early March fits the pattern of what is known as fake news — reports that have virtually no basis in fact, but spread virally online due to their ostensibly explosive content.

One data analyst wanted to know how fake news spreads, and whether corrections are an effective countermeasure. He collected the Twitter data of all accounts that had posted something on the issue, and flagged all tweets sharing the fake news as red and all those forwarding the correction as blue. He then compiled a graphic visualization of these accounts that illustrates the density of their respective networks.

In other words, the smaller the distance between two dots, the more closely knit the network connections between the accounts they refer to. The result is striking. The disparity between the two groups is revealed both by the coloring and by the relative position of the accounts. On the left, we see a fairly diffuse blue cloud with frayed edges.

Several large blue dots relatively close together in the center represent German mass media such as Spiegel Online, Zeit Online, or Tagesschau. The blue cloud encompasses all those accounts that reported or retweeted, which is to say, forwarded, the correction. On the other side, we see a somewhat smaller and more compact red mass consisting of numerous closely-spaced dots. These are the accounts that disseminated the fake news story. They are not only closely interconnected, but also cut off from the network represented by the large blue cloud. What is crucial here is the clear separation between the red and blue clusters.
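The kind of graph described here can be reproduced with standard tools. Below is a sketch using networkx with invented accounts and follower relations: dense links inside each cluster, almost none across, and a force-directed layout that places tightly knit groups close together, just as in the visualization described above.

```python
import random
import networkx as nx
import matplotlib.pyplot as plt

random.seed(1)
G = nx.Graph()
fake = [f"fake_{i}" for i in range(20)]   # accounts spreading the fake story (red)
corr = [f"corr_{i}" for i in range(30)]   # accounts spreading the correction (blue)
G.add_nodes_from(fake + corr)

# Dense connections inside each cluster, a single bridge between them.
for cluster in (fake, corr):
    for node in cluster:
        for other in random.sample(cluster, 4):
            if node != other:
                G.add_edge(node, other)
G.add_edge(fake[0], corr[0])

# The force-directed layout pulls densely linked accounts together,
# reproducing the two separate clouds of the original graphic.
pos = nx.spring_layout(G, seed=42)
colors = ["red" if n in set(fake) else "blue" for n in G.nodes]
nx.draw(G, pos, node_color=colors, node_size=30, width=0.3)
plt.show()
```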

There is virtually no communication between the two. Every group keeps to itself, its members connecting only with those who already share the same viewpoint. I joined the analysis, even though I was skeptical of the filter bubble hypothesis. At first glance, the hypothesis is plausible. A user is said to live in a filter bubble when his or her social media accounts no longer present any viewpoints that do not conform to his or her own. Eli Pariser coined the term with a view to the algorithms used by Google and Facebook, which personalize our search results and news feeds by pre-sorting search results and news items.

Filter bubbles exist on Twitter as well, seeing as every user can create a customized small media network by following accounts that stand for a view of the world that interests them. Divergent opinions, conflicting worldviews, or simply different perspectives simply disappear from view. This is why the filter bubble theory has frequently served as a convincing explanation in the debate about fake news. If we are only ever presented with points of view that confirm our existing opinions, rebuttals of those opinions might not even reach us any more. The filter bubble thus turns into an echo chamber in which all we hear is what we ourselves are shouting out into the world.

Before examining the filter bubble theory, however, we first tried to reproduce the results using a second example of fake news. This time, we found it in the mass media: the BILD newspaper had reported that, on New Year's Eve, a mob of migrants had sexually harassed women in Frankfurt's Fressgass. The BILD story quickly made the rounds, mainly because it gave the impression that the Frankfurt police had kept the incident quiet for more than a month. In fact, the BILD journalist had been told the story by a barkeeper who turned out to be a supporter of the right-wing AfD party, and BILD had printed it immediately without sufficient fact-checking.

As it turned out, the police were unable to confirm any of it, and no other source for the incident could be found. Even so, other media outlets picked up the story, though often with a certain caution. In the course of the scandal, it became clear that the barkeeper had made up the entire story, and BILD was forced to apologize publicly. For our coding, the binary distinction between spreading and correcting was not enough; we needed a third category in between. We collected all the articles on the topic in a spreadsheet and flagged them as either spreading the false report (red) or just passing it on in a distanced or indecisive manner (yellow).

Of course, we also collected articles disproving the fake news story (blue). We also assigned some of the tweets to a fourth category, the meta-debate: the mistake the BILD newspaper made sparked a broader discussion of how a controversial but well-established media company could become the trigger point of a major fake news campaign. These meta-debate articles were colored green. The cloud of corrections, superimposed with the meta-comments, is visible in blue and green, brightened up here and there by yellow specks of indecision. Most noticeably, the red cluster of fake news again clearly stands out from the rest, in terms of both color and connectivity.

Our fake news bubble is obviously a stable, reproducible phenomenon. So we were still dealing with the theory that we were seeing the manifestation of a filter bubble. To be honest, I was skeptical. The existence of a filter bubble is not what our examples prove. The filter bubble theory makes assertions about who sees which news, while our graph only visualizes who disseminates which news. This information, however, is also encoded in the Twitter data, and can simply be extracted.



For any given Twitter account, we can see the other accounts it follows. In a second step, we can bring up all the tweets sent from those accounts. Once we have the tweets from all the accounts the original account follows, we can reconstruct the timeline of the latter. In other words, we can peer into the filter bubble of that particular account and recreate the worldview within it. In a third step, we can determine whether a particular piece of information penetrated that filter bubble or not. In this manner, we were able to retrieve the timelines of all accounts that had spread the fake news story, and scan them for links to the correction.
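Here is a sketch of that three-step procedure, assuming the tweepy library (version 4) with valid Twitter API credentials; the function name and the idea of identifying the correction by a URL substring are illustrative assumptions, not the exact code used in the study.

```python
import tweepy

# Assumed credentials; fill in your own keys.
auth = tweepy.OAuth1UserHandler("API_KEY", "API_SECRET", "TOKEN", "TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

def correction_reached(screen_name: str, correction_url: str) -> bool:
    """Reconstruct the timeline of `screen_name` from the accounts it follows
    and check whether the correction ever appeared inside its filter bubble."""
    # Step 1: the accounts this user follows.
    for friend_id in api.get_friend_ids(screen_name=screen_name):
        # Step 2: the recent tweets sent from each followed account.
        for tweet in api.user_timeline(user_id=friend_id, count=200):
            # Step 3: did the correction penetrate the bubble?
            if correction_url in tweet.text:
                return True
    return False
```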

The result is surprising: almost all the disseminators of the false terrorism alert story had the correction appear in their timelines. We repeated the test for the Fressgass story, and the picture was much the same in both examples. This finding contradicts the filter bubble theory. It was obviously not for technical reasons that these accounts continued to spread the fake news story and not the subsequent corrections. At the very least, the filter bubbles of these fake news disseminators were far from airtight.

Without expecting much, we ran a counter-test: What about those who had forwarded the correction — what did their timelines reveal? Had they actually been exposed to the initial fake news story they were so heroically debunking?



We downloaded their timelines and looked into their filter bubbles. Once again, we were surprised by what we found: only a fraction of the correctors had ever been exposed to the fake news story in the first place. This result does suggest the existence of a filter bubble — albeit on the other side. In the case of the Fressgass story, the figure is even lower. So these results do indicate a filter bubble, but in the other direction. To sum up, the filter bubble effect insulating fake news disseminators against corrections is negligible, whereas the converse effect is much more noticeable. No, according to our examples, filter bubbles are not to blame for the unchecked proliferation of fake news.

On the contrary, we have demonstrated that while a filter bubble can impede the dissemination of fake news, it does not insulate users from being confronted with the correction. We are not dealing with a technological phenomenon. The reason why people whose accounts appear within the red area spread fake news is not a filter-induced lack of information or a technologically distorted view of the world.

They receive the corrections, but do not forward them, so we must assume that their dissemination of fake news has nothing to do with whether a given piece of information is correct, and everything to do with whether it suits them. The psychologist Leon Festinger described this mechanism in his theory of cognitive dissonance: people tend to have a distorted perception of events depending on how much those events clash with their existing worldview. When an event runs counter to our worldview, it generates cognitive dissonance, an unpleasant state of inner tension. Since this state is so disagreeable, people intuitively try to avoid it by adopting a behavior psychologists call confirmation bias: perceiving and taking seriously only information that matches their worldview, while disregarding or squarely denying any other information.

In this sense, the theory of cognitive dissonance tells us that the red cloud likely represents a group of people whose specific worldview is confirmed by the fake news in question. To test this hypothesis, we extracted several hundred of the most recent tweets from each of our Twitter user pools, both from the fake news disseminators and from those who had forwarded the correction, and compared word frequencies between the user groups.
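The comparison itself needs only a few lines; in the sketch below, the two tweet lists are placeholders standing in for the actual user pools, and a real analysis would also strip stop words before counting.

```python
import re
from collections import Counter

# Placeholder pools; in the study these held hundreds of tweets per group.
fake_pool = ["...tweets of the fake news disseminators..."]
corr_pool = ["...tweets of the correction forwarders..."]

def top_terms(tweets, n=16):
    words = []
    for tweet in tweets:
        words += re.findall(r"\w+", tweet.lower())
    return Counter(words).most_common(n)  # the n most frequent terms

# Contrasting the two rankings exposes the thematic profile of each group.
print(top_terms(fake_pool))
print(top_terms(corr_pool))
```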


If the theory was correct, we should be able to show that the fake news group was united by a shared, consistent worldview. We subjected the collected tweets to a frequency analysis and visualized the sixteen most important terms as a word cloud, the word size corresponding to the relative frequency of each term. Certain thematic limitations become obvious at first glance. The narrative being told within this group is about migration, marked by a high frequency of terms like Islam, migrants, refugees, Pegida, Syrians. A manual inspection of these tweets showed that the recurring topics are the dangers posed by Islam and crimes committed by individuals with a migrant background, refugees in particular.

A second, less powerful narrative concerns the self-conception of the group as defenders of free speech, with the NetzDG as a frequent target: this new law regulating speech on Facebook and Twitter is obviously regarded on the right as a political attack on freedom of speech. The extent of thematic overlap amongst the spreaders of fake news is massive, and becomes even more apparent when we compare these terms with the most common terms used by the correction tweeters.

First of all, it is noticeable that the most important terms used by the correctors are not politically loaded. They are more about media brands and news coverage in general: nine of the sixteen terms are simply general references to news media. The remaining terms, however, do show a slight tendency to address the right-wing spectrum politically. Donald Trump is such a big topic that both his first and his last name appear in the top sixteen. All of this would seem to confirm the cognitive dissonance hypothesis.

Avoidance of cognitive dissonance could explain why a certain group might uncritically share fake news while not sharing the corresponding correction. When comparing the two groups in both examples, we already found three essential distinguishing features. In short, we are dealing with two completely different kinds of group. Whenever differences at the group level are so salient, we are well advised to look for an explanation that goes beyond individual psychology.

Cognitive dissonance avoidance may well play a part in motivating the individual fake news disseminator, but since we regard it as a conspicuous group-wide feature, the reasons will more likely be found in sociocultural factors. This again is a subject for further research. In fact, there has been a growing tendency in research to embed the psychology of morals, and hence of politics, within a sociocultural context. Drawing on a wealth of research, the moral psychologist Jonathan Haidt showed that, firstly, we make moral and political decisions based on intuition rather than reasoning, and that, secondly, those intuitions are informed by social and cultural influences.

Humans are naturally equipped with a moral framework, which is used to construct coherent ethics within specific cultures and subcultures. We have a tendency to determine our positions as individuals in relation to specific reference groups. Our moral toolbox is designed to help us function within a defined group. When we feel we belong to a group, we intuitively exhibit altruistic and cooperative behaviors.

With groups of strangers, by comparison, we often show the opposite behavior. We are less trusting and empathetic, and even inclined to hostility. Haidt explains these human characteristics by way of an excursion into evolutionary biology. From a rational perspective, one might expect purely egoistic individuals to have the greatest survival advantage — in that case, altruism would seem to be an impediment.

Ever since humanity went down the route of closer cooperation, however, groups of cooperating individuals have outcompeted lone egoists; it was altruism that paid off. Or perhaps we should say: it was tribalism. The basic tribal configuration not only includes altruism and the ability to cooperate, but also the desire for clear boundaries, group egotism, and a strong sense of belonging and identity. These qualities often give rise to behaviors that, as members of an individualistic society, we believe we have given up, seeing as they often result in war, suffering, and hostility.

However, for some time there have also been attempts to establish tribalism as a positive vision for the future. From punk to activist circles, humans feel the need to be part of something greater, to align themselves with a community and its shared identity. In the long run, the sociologist Michel Maffesoli argued, this trend works against mass society. There are more radical visions as well. In his grand critique bearing the programmatic title Beyond Civilization, the writer Daniel Quinn appeals to his readers to abandon the amenities and institutions of civilization entirely and to found new tribal communities.

In his book boldly titled Tribes, Seth Godin pointed out how well-suited the Internet is to the establishment of new tribes.

Isao Yamaguchi was so proud. Again and again, old discoveries are unknowingly rediscovered. Yamaguchi's and Menger's papers seemed to be perfectly normal discoveries in the everyday business of science.

One emeritus chemist, despite being retired, still eagerly follows the trade press, and the discovery seemed rather familiar to him. This type of reaction for obtaining the particular annulene with its two nitrogen atoms was by no means new: it was the so-called Zincke reaction, named after the German chemist Theodor Zincke. Years before Yamaguchi. So the emeritus turned detective. Otto Hahn, the discoverer of nuclear fission, earned his doctorate under Zincke.

The "Annalen" were published until they were merged with several other journals to form the "European Journal of Organic Chemistry".