In Steven Spielberg’s 2018 film Ready Player One, we are presented with a dystopia in which the vast majority of the population escapes reality through a VR wonderland known as the OASIS. Australian actor Ben Mendelsohn plays the nefarious Sorrento, who attempts to seize control of this world using his own private army.
While the technology for this kind of fully integrated virtual world may not yet exist, the groundwork for something very similar is already being laid. With the rebranding of Facebook and its subsidiaries as Meta, Mark Zuckerberg has staked a huge claim on that potential future. And while the technology has undeniably huge potential, the past 15-20 years suggest that although a small handful of people use these platforms for betterment, the vast majority use them to indulge our worst impulses. Facebook is used as a platform to troll people until they either come around to your point of view or leave in anger and disgust. Others use it to conduct extra-marital affairs. It has the power to change minds and sway populations, as we saw in the 2016 American presidential election.
Facebook’s Rise to Prominence & Fall From Grace
And that’s just the users. The company – from its inception to the present day – has not conducted itself in what could be called a “righteous” manner. The events surrounding Zuckerberg’s rise – both a matter of public record and effectively dramatised in David Fincher’s brilliant film The Social Network – are positively Machiavellian, with him willingly stepping over the people who helped him climb from an unknown computer programmer at Harvard to one of the richest and most influential men on the planet. In many ways, this story of one-upmanship and betrayal informed and defined Facebook’s corporate culture. The company’s meteoric rise to prominence – often achieved by buying out or destroying rivals – was quick, brutal and absolute.
The practices of the company have been called into question by various investigative bodies, from news organisations to governments. Perhaps most prominent among these is the Cambridge Analytica scandal, in which it emerged that the personal data of tens of millions of Facebook users had been harvested without their consent and combined with algorithmic profiling to manipulate what appeared in consumers’ feeds. This was particularly problematic because the data was used by various political campaigns to target potential voters.
In particular, Donald Trump’s successful 2016 run for the presidency used the Cambridge Analytica data to build psychographic profiles, inferring users’ personality traits from their Facebook activity. The campaign team used this information for micro-targeting, displaying customised messages about Trump to different US voters across various digital platforms. From alleged use by Russian hacking groups working to aid Trump’s bid, to the targeted advertising credited with helping swing the Brexit referendum in the UK, the data gathered by Cambridge Analytica has been linked to several world-altering events.
Frances Haugen Exposes the Truth
A more recent leak from inside the company concerns the explosive testimony of former Facebook employee Frances Haugen. Haugen provided the Wall Street Journal with internal research, conducted by the company itself, that categorically documents the detrimental effect social media has on its most vulnerable users – kids under 18. These documents show that Instagram is toxic for teenage girls, with issues pertaining to body image and self-esteem at the centre of the controversy. During her recent appearance before a Senate sub-committee hearing into these findings, she emphasised the centrality of the algorithms she worked on during her tenure at Facebook: they are now the primary drivers of its business model. Algorithms dictate which content users see in their feeds; algorithms are relied upon to catch harmful content; algorithms choose which ads users see on a minute-to-minute basis.
The problem is that these powerful algorithms are nowhere near perfect. Facebook’s algorithms cannot reliably catch content that might be harmful to teens and children. They cannot catch underage users until it’s too late. And in an attempt to keep users engaged, they often serve up downward-spiralling content such as posts that promote eating disorders. That is because the algorithms are not designed to make using Facebook as helpful or as wholesome as possible; they are designed to keep users hooked.
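To make that distinction concrete, here is a deliberately simplified sketch. This is not Facebook’s actual code; the posts, scores, weights and function names are all invented for illustration. It simply shows how a feed ranked purely on predicted engagement can surface harmful content that a ranking which also penalises predicted harm would bury:

```python
# Toy illustration: ranking purely for engagement vs. ranking that also
# penalises potentially harmful content. All values here are invented.

posts = [
    {"id": 1, "predicted_engagement": 0.91, "harm_score": 0.80},  # e.g. extreme dieting content
    {"id": 2, "predicted_engagement": 0.55, "harm_score": 0.05},  # e.g. a friend's holiday photos
    {"id": 3, "predicted_engagement": 0.72, "harm_score": 0.10},
]

def rank_for_engagement(feed):
    """Order posts purely by how likely the user is to keep scrolling and clicking."""
    return sorted(feed, key=lambda p: p["predicted_engagement"], reverse=True)

def rank_with_wellbeing(feed, harm_weight=1.0):
    """Order posts by predicted engagement minus a penalty for predicted harm."""
    return sorted(
        feed,
        key=lambda p: p["predicted_engagement"] - harm_weight * p["harm_score"],
        reverse=True,
    )

print([p["id"] for p in rank_for_engagement(posts)])  # [1, 3, 2]: the harmful post wins
print([p["id"] for p in rank_with_wellbeing(posts)])  # [3, 2, 1]: the harmful post sinks
```

The only difference between the two rankings is what the system is told to optimise for – which is precisely Haugen’s point about the business model.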
A Virtual World Controlled By Facebook
Given the unscrupulous way the company is run, coupled with users’ tendency to misuse technology and corrupt its intended purpose, the notion of an augmented reality space controlled by a man like Zuckerberg – and by the aforementioned algorithmic thinking – is a truly frightening one. The notion of the “bubble” (i.e. the algorithm dictating what you do and don’t see in your feed based on factors like your personal, religious and political beliefs) is not a new one. But the idea that, through AR glasses, people might be able to filter out things like fertility clinics or gun shops (depending on which side of the political fence the user is on) means that said bubble would begin to affect the way we perceive reality itself, rather than just the version presented on our screens.
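A minimal sketch of how such a bubble might work in practice follows; the user profiles, place names and categories are all hypothetical, invented purely to show how two people wearing the same glasses could be shown two different streets:

```python
# Toy sketch of the "bubble" applied to augmented reality: points of interest
# a headset might label, filtered against each wearer's preference profile.
# The profiles, categories and places below are invented for illustration.

points_of_interest = [
    {"name": "City Fertility Clinic", "category": "fertility_clinic"},
    {"name": "Downtown Gun Shop", "category": "gun_shop"},
    {"name": "Corner Coffee House", "category": "cafe"},
]

# Two hypothetical users whose profiles hide different categories from view.
user_profiles = {
    "user_a": {"hidden_categories": {"gun_shop"}},
    "user_b": {"hidden_categories": {"fertility_clinic"}},
}

def visible_places(places, profile):
    """Return only the places a given profile allows the wearer to 'see'."""
    return [p["name"] for p in places if p["category"] not in profile["hidden_categories"]]

for user, profile in user_profiles.items():
    print(user, "sees:", visible_places(points_of_interest, profile))
    # Two people standing on the same street would literally see different cities.
```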
Experts fear that the Metaverse could warp and corrupt reality as we know it, allowing third parties to dictate what we do and don’t see – creating personalised versions of reality – and making political polarisation even worse. It has the potential to propagate misinformation, further fracturing what we as a species agree upon as factual and real.
What The Experts Think
Louis Rosenberg, a 30-year veteran of AR development and the CEO of Unanimous AI, stated:
“Instead of us just kind of being in our own information bubbles, we’re going to be segmented into our own custom realities.”
Shawn Frayne, CEO of holographic tech startup Looking Glass Factory, assures us that any negative feelings we may hold about the metaverse are not unfounded:
“Folks should be worried. If you think Facebook on your phone has been bad for democracy, think about your entire field of view controlled by a company like that.”
A World Without Rules
Along with these potentially harmful aspects of the metaverse, there is also the problematic nature of advertising in this space. The same algorithms used to target ads to users based on their social media activity could be harnessed to create avatars that appear real but are in fact AIs custom-designed to sell you something: a truly horrifying and dystopian concept.
At the beginning of this article, I brought up Ready Player One. While it is just fiction, the Sorrento of the real world – Mark Zuckerberg – would be in total control of this potential new space. Instead of a private army, he would simply use the power he has amassed over the past 15-plus years running Facebook (now Meta). He would essentially be the dictator of this new reality. Without regulation and oversight – already a huge issue in social media itself – the virtual world would be subject to the same problems, only amplified in massive and unpredictable ways.