We often hear about how big companies collect our data, get to know us through it and target advertising at us based on it. But with data being everywhere, the pressing question is how that data actually shapes us and the world around us. We do not only produce data; the data also creates us.
Text: Vitus Besel
Every now and then, when I feel like listening to music, I just open YouTube, click some suggestion on its front page, turn auto-play on, and see where it carries me. It doesn’t matter what your weapon of choice is, be it Spotify, Tidal or whichever streaming service, you have probably experienced something similar. Sometimes I am taken to great places where I discover new music and bands I didn’t know existed. On other days I am stuck in an infinite loop that makes me feel like the same songs, or at least the same monotonous melodies, are haunting me again and again and again…
The reason for this lies in the algorithms behind the streaming platforms. They are essentially based on your taste: these services keep a history of the songs you listen to and then suggest similar songs, or songs enjoyed by people who also liked what you listened to. Essentially, it is very simple: you are a row in Spotify’s massive database, and the algorithm just compares the information stored there. Moreover, it creates a loop: it suggests music to you, you listen to the suggestion, the suggestion is added to your data row, and your next song is chosen based on that again. The algorithm buries you ever deeper in the music you already like, which raises interesting questions: What if someone without any previous listening history subscribed to Spotify and started listening? What taste would they develop?
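The “people who liked what you liked also liked this” idea described above can be sketched in a few lines. This is only a toy illustration of user-based collaborative filtering, not Spotify’s actual system; the users, songs and histories are invented:

```python
# Toy sketch of "users with similar taste also liked..." recommendation,
# including the feedback loop: each suggestion is played and written
# straight back into the listening history. All data here is made up.

from collections import defaultdict

# Listening history: user -> set of songs they have played
history = {
    "alice": {"song_a", "song_b", "song_c"},
    "bob":   {"song_a", "song_b", "song_d"},
    "carol": {"song_e", "song_f"},
}

def recommend(user, history):
    """Rank unheard songs by how similar their listeners are to `user`."""
    mine = history[user]
    scores = defaultdict(int)
    for other, theirs in history.items():
        if other == user:
            continue
        overlap = len(mine & theirs)   # shared songs = crude similarity
        for song in theirs - mine:     # only songs `user` hasn't heard
            scores[song] += overlap
    # Highest score first; songs from entirely dissimilar users drop out
    return [s for s, sc in sorted(scores.items(), key=lambda kv: -kv[1]) if sc > 0]

# The loop from the article: the suggestion is listened to and added to
# the data row, so the next suggestion is based on it again.
suggestion = recommend("alice", history)[0]
history["alice"].add(suggestion)
```

Because alice shares two songs with bob and none with carol, she is recommended bob’s remaining song and never carol’s, and after the update her history resembles bob’s even more: the bubble digging itself deeper.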
Let’s talk about how we do not only produce data, but how the data produces us.
By now we are used to “our” data being collected. Some despise it, some don’t care, and most of us have simply accepted it as a necessary evil that we cannot do much about and whose implications are often not apparent to us. Google, Facebook, YouTube, you name them, collect basically everything we give them, be it explicitly, like where we are, what we like to spend our time on and what we like to spend our money on, or implicitly, like our political orientation. This data means money to them, because they can sell it and use it so that advertisers can show you what you want.
In order to get as much data as possible from you, the main task of these services is to keep you engaged. Take YouTube, for example: you watch a video, and it suggests similar videos so that you keep spending time on the platform and it can collect more data. In practice, let’s say someone, possibly without a strong opinion, stumbles across a video in which some vlogger explains that foreigners are bad for the economy. The vlogger may use very flimsy and wrong argumentation. However, the platform suggests similar videos, and if this someone keeps watching them, the conveyed thoughts start to cement, simply through the sheer mass of similar videos. A bubble has been created; the algorithm has created a reality, or rather a mindset, in someone. These bubbles can be created for virtually anything, and they are certainly not exclusive to left-wing or right-wing ideas.
Here’s another example of how algorithms create reality. Let’s assume that in some city, due to resentment against a minority, people, including the police, believe that there is more crime in the district predominantly populated by this minority. Consequently, the police send many more officers on patrol to this district. Now, because there are more police officers in this district, they will find more crime, even if the crime rate is the same as in any other district, simply because there are more eyes looking. The data now claims that there is in fact more crime, so officers keep being sent to this district in disproportionately high numbers and are therefore not used efficiently across the whole city. If the police department now modernized and let an algorithm decide where to send officers, the algorithm would be fed this data and produce exactly the same outcome, because the data says there is more crime in this district.
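The feedback loop above can be simulated in a few lines. This is a deliberately simplified toy model with invented numbers, not a description of any real policing system: two districts have the exact same true crime rate, but patrols are allocated according to recorded crime, and more patrols record more crime:

```python
# Toy model of the patrol feedback loop: both districts have the SAME
# underlying crime rate, yet a biased starting record steers patrols,
# and patrols produce the next record. All numbers are invented.

def run_feedback_loop(rounds=5, officers=100):
    # Biased prior: district A is *believed* to have more crime (60 vs 40)
    recorded = {"A": 60, "B": 40}
    true_rate = 0.5  # crimes found per officer, identical in both districts
    for _ in range(rounds):
        total = recorded["A"] + recorded["B"]
        # Allocate patrols proportionally to recorded crime...
        patrols = {d: officers * recorded[d] / total for d in recorded}
        # ...and more eyes find more crime, at the same true rate everywhere.
        recorded = {d: patrols[d] * true_rate for d in recorded}
    # Share of the force patrolling each district after `rounds` iterations
    return {d: patrols[d] / officers for d in recorded}

shares = run_feedback_loop()
```

However many rounds you run, district A keeps receiving 60% of the officers and district B 40%: the data never corrects the initial prejudice, because the record the algorithm learns from is itself produced by where the officers were sent.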
What to take away from this? First of all, algorithms, and computers in general, are only as smart, and as neutral, as the humans who create and operate them, in the previous example the programmer and the officers whose patrols produced the data. Moreover, the bubbles in which we move are not consciously created; they are rather a by-product of the goal of keeping us engaged.
Various platforms have started to address this issue: Twitter labels tweets that contain misinformation, and Facebook as well as YouTube indicate, for example, what kind of media outlet a piece of information comes from. However, so much content is constantly being produced that it is hard for them to keep up. It comes down to the users, to us, having to be critical. It is a collective responsibility. Because we need to keep in mind that this technology also means enormous progress in medical, social and scientific terms. And it cannot simply be divided into “use” and “abuse”; there is a third category, where algorithms just work autonomously, and people have to be mindful of what to do with the algorithms’ outputs.
Often it is nice to be in our bubble. It gives us security; people feel comfortable with what they know, and I feel good when Spotify keeps suggesting indie music to me. However, it is my responsibility to engage with other kinds of music every now and then, to broaden my horizon, so that even when I eventually go back into my bubble, I at least know what else is out there. And when I meet someone who lives in their “Metal” bubble or their “Jazz” bubble, I won’t be afraid and I won’t judge them, because I know that I live in a bubble and so do they, and we can engage in conversation and evolve together.
I won’t have algorithms create all of my reality, but I will create my own reality with a little help from some algorithms.