This was my final presentation for the course I took on data bias this past semester. It can also serve as a kind of jumping-off point for explaining some of the issues that I spend most of my time thinking about. That being said, I still have some problems with how I framed things in this.
My main issue with what I said in this presentation is that it might sound like I’m assigning too much agency to tech companies. I think I use the term “techno-oligarchy” in this video. Obviously, I don’t hold them in the highest esteem, but neither do I think they are these all-powerful villains. I say this with the full knowledge that Mark Zuckerberg is building a Bond-villain lair in Hawaii.
The thing is that these billionaires, or pretty much anybody with money or power, will inherently be able to exert more force in shaping what the future looks like than you or I can. That’s just how the system works. So what can we do to counteract this? What kind of future do we want to live in?
In this video I talk about the pro-innovation bias and explain how I think it developed from a fallacy known as the “appeal to modernity.” Opposing this is another fallacy known as the “appeal to antiquity,” and as they’re both fallacies, it’s hopefully apparent that neither is correct. Both can be exploited politically, though, so it’s incredibly important to at least be aware of them.
I also discuss technological determinism, or the belief that technology is the primary driver of history. Opposing this is the idea of social determinism, or the belief that social structures are the primary driver of history. I don’t think either of these is exactly right. It’s important to remember that while technology can do some amazing things, it does not have the agency necessary to choose what the future will hold. At the same time, the societies that we live in can also change rapidly, and the choices we make about technology can affect these changes.
It becomes an issue of threading the needle between modernity and antiquity, and between technological and social determinism, because what truly determines the future is the choices we make. Musk or Zuck, or Trump or Bibi, might each be able to exert more power than you or I, but that doesn’t mean we are powerless.
In this video I mention how our sociopolitical and economic systems are cybernetic, which means that they adjust or respond to feedback. While those with money and power might be able to effect more change on these systems, it’s of the utmost importance that we try to understand what feedback mechanisms exist for us to make changes to them as well. I tend to view democracy as the “least bad” system of governance that we’ve tried. I like that it’s iterative, and that we can vote out the asshats when they do things to harm us.
I used to think the horseshoe theory was utter bullshit, and I agree with the point Cory Doctorow makes in this post about it being "very, very wrong," but at the same time I’ve also noticed a trend in leftist discourse online toward the type of accelerationism that is common among white supremacists like the boogaloo bois. There’s always the chance that what I’ve been seeing is purely the product of troll farms, but it’s also easy to imagine that when a system doesn’t work for you, and you don’t think you can effect change, you might want to burn the system down.
I’ve said before that I think accelerationists suffer from main character syndrome, and vastly underestimate the scale of suffering that would accompany the large-scale collapse of the current system.
At the same time, it’s also important to remember that society is essentially a game of make-believe that we’re playing with every other human being. Fundamentally, there is no such thing as a social security number. You can’t eat gold, and a job is really only worth however much somebody is willing to pay you.
The issue becomes not falling victim to nihilism. If we want a better society, we need to spend time imagining what that society will be, and work together to build the type of society we want to live in. The mechanisms for change exist, and while we might not be able to effect change on the same level as billionaires, if we organize and convince others to believe in our ideas, we can still make the world a better place.
The future is built on the choices we make and what we choose to believe in.
My ideal form of government is probably some sort of anarcho-communism with strong democratic undercurrents, but I don’t think we’d be able to get there without either the large-scale collapse of society or some sort of sci-fi technology. Since I don’t want to see the amount of suffering that would be caused by the former, and doubt the likelihood of the latter, I’m willing to settle for the type of democratic socialism that provides strong safety nets and promotes the fundamental human rights outlined by organizations like the United Nations.