No university campus is complete without a fierce rivalry between STEM and humanities students, and it’s fair to say the scientists have been winning for a long time now. Artists and thinkers may have dominated during the Renaissance, but every era since the Industrial Revolution has belonged to the tech worker. Apple’s market cap is bigger than the GDP of 96% of the world’s economies, and digitally transformed enterprises now account for almost half of global GDP.
But as technology hits milestone after milestone and reaches a certain critical mass, I think the humanities are about to make a long-awaited comeback. Technological innovation, especially artificial intelligence, is crying out for us to ponder critical questions about human nature. Let’s look at some of the biggest debates and how disciplines like philosophy, history, law and politics can help us answer them.
The potentially ominous or destructive consequences of artificial intelligence have been the subject of countless books, films and TV shows. For a while, that might have seemed like nothing more than fear-mongering speculation — but as technology continues to advance, ethical debates are starting to seem more relevant.
As AI becomes capable of replacing workers in a growing number of professions, the prospect of widespread joblessness raises all kinds of moral dilemmas. Is it the role of government to offer a universal basic income and completely restructure our society, or do we let people fend for themselves and call it survival of the fittest?
Then there’s the question of how ethical it is to use AI to enhance human performance and avoid human failure in the first place. Where do we draw the line between a “human” and a “machine”? And if the lines become blurred, do robots need the same rights as humans? The decisions we make will ultimately determine the future of the human race and could make us stronger or weaker (or see us eliminated completely).
One of the AI advances raising eyebrows is Google’s Language Model for Dialogue Applications (LaMDA). The system was first introduced as a way of connecting different Google services together, but it ended up sparking debate about whether LaMDA was, in fact, sentient — as Google engineer Blake Lemoine claimed after witnessing how realistic its conversations were.
Ultimately, the general consensus was that Lemoine’s arguments were unfounded. LaMDA was only using predictive statistical techniques to hold a conversation; the fact that its algorithms are sophisticated enough to sustain a seemingly natural dialogue doesn’t mean LaMDA is sentient. However, the episode does raise an important question: if a theoretical AI system were able to do everything a human can, including having original thoughts and feelings, would it deserve the same rights humans have?
The debate over what exactly we should see as human is nothing new. Back in 1950, Alan Turing proposed the Turing test to judge whether a machine can converse so convincingly that it is indistinguishable from a human, a bar some take as evidence that machines have some level of “consciousness.”
However, not everyone agrees. The philosopher John Searle countered with a thought experiment known as the “Chinese room.” Imagine a person who speaks no Chinese sitting in a room with the machine’s program written out as instruction cards. By following those instructions, they could pass written replies through a slot in the door and fool someone outside into believing they speak Chinese — when clearly they don’t. By analogy, a machine can manipulate symbols convincingly without understanding anything at all.
According to Lemoine, Google wasn’t willing to allow a Turing test to be performed on LaMDA, so it seems Searle isn’t alone in his reservations. But who is going to settle these issues?
As more of our lives become enriched by AI, more of these questions will arise. 80,000 Hours, a nonprofit run by Oxford academics that focuses on how individuals can have the greatest impact in their careers, has highlighted positively shaping the development of artificial intelligence as one of the most prominent issues the world faces right now.
And although some of the work is likely to focus on technical research into how to program AI in a way that works for humans, policy and ethical research are also set to play a huge role. We need people who can tackle debates such as which tasks humans have fundamental value in performing and which should be handed over to machines, or how humans and machines can collaborate in human-machine teams (HMTs).
Then there are all the legal and political implications of a society filled with AI. For instance, if an AI engine running an autonomous car makes a mistake, who is responsible? There are cases to argue for the fault lying with the company that designed the model, the human drivers whose behavior the model learned from, or the AI itself.
For questions such as the last one, lawyers and policymakers are needed to analyze the issues at hand and advise governments on how to react. Their efforts would be complemented by historians and philosophers who can look back and see where we’ve fallen short, what has kept us going as a human race and how AI can fit into this. Anthropologists will also have plenty to offer based on their studies of human societies and civilizations over time.
The rise of AI may happen faster than anyone can anticipate. Metcalfe’s law says that the value of a network grows in proportion to the square of its number of users, so each new user adds more potential value than the last. We’ve seen this play out in the spread of social networks, but the law is a potentially terrifying prospect when we talk about the fast ascent of AI. And to make sense of the issues outlined in this article, we need thinkers from all disciplines. Yet the number of U.S. students earning humanities degrees fell by 25% between 2012 and 2020.
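To see why Metcalfe’s law implies such rapid growth, here is a minimal numerical sketch (my own illustration, not from the article): the law models a network’s potential value as proportional to the number of possible pairwise connections between users, which grows roughly with the square of the user count.

```python
# Metcalfe's law: a network's potential value scales with the number of
# distinct pairwise connections among n users, n * (n - 1) / 2 -- i.e.,
# roughly n squared, not linearly with n.
def pairwise_connections(n: int) -> int:
    """Number of distinct links possible between n users."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n:>5} users -> {pairwise_connections(n):>7} possible connections")
```

Growing the network 100-fold (from 10 to 1,000 users) multiplies the possible connections by more than 10,000 — the kind of compounding that makes network effects, in tech platforms and AI adoption alike, hard to anticipate.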
As AI becomes a greater part of our daily lives and technology continues to advance, nobody in their right mind would deny that we need brilliant algorithm developers, AI researchers and engineers. But we’ll also need philosophers, policymakers, anthropologists and other thinkers to guide AI, set limits and help in situations of human failure. This requires people with a deep understanding of the world.
At a time when the humanities are largely viewed as “pointless degrees” and young people are discouraged from studying them, I would contend that there’s a unique opportunity to revitalize them as disciplines that are more relevant than ever — though this requires collaborations between technical and non-technical fields that are complex to build. Either way, these functions will inevitably be performed; how well depends on our ability to prepare future professionals who bring both multidisciplinary and interdisciplinary views of the humanities.