AI Widening the Gap between Humanities and STEM Worlds

bionic hand and human hand finger pointing
Photo by cottonbro studio

Last semester, for the first time in my college career, I enrolled in no Computer Science classes, despite being a Computer Science concentrator. My evening d-hall office hours and weekly problem sets yielded to readings and discussions about Asian American Literature, K-12 Education in the US, and international development.

It threw me into the experience of the other half of Harvard students. At first, my classes didn't feel real without the regular satisfaction of solving problems that had been incomprehensible just a week earlier. Instead, we read book excerpts and talked about ideas. I stopped crossing paths with friends whom I'd typically see in the SEC or at office hours, and I suddenly felt very far removed from the bubble of STEM concentrators across the river.

On campus, the STEM-humanities divide is a longstanding feature of the social terrain. Students often introduce themselves with their major, qualifying that they "could never do Computer Science" or "haven't read a book in years." Clubs and preprofessional groups generate friend groups around shared passions, whether it's coding, business, politics, or medicine.

You'd be hard-pressed to find people passionately discussing the ethical implications of technology from both sides of this divide. Usually, collective viewpoints sway toward one extreme or the other. Even among the numerous Harvard students with interdisciplinary interests, distinct attitudes dominate STEM and humanities conversations. For example, CS concentrators commonly brush off the EthiCS modules that raise ethical considerations relevant to each CS class. Humanities majors who branch into STEM subjects might opt for a class designed specifically for humanities students rather than a "harder" problem set class.

It's no surprise that reactions to new generative AI technology are split as well, roughly between technology enthusiasts and outraged artists and writers. Since Dall-E 2 and GPT-3 were released last year, the revolutionary advancements in high-fidelity AI-generated art and writing have sparked both enthusiasm and backlash. Technologists are bullish on the almost magical power of AI to save human resources, freeing people to focus on obviously more worthwhile things like coding. Meanwhile, vehemently opposed artists, writers, and skeptics rightfully wonder what human flourishing is threatened by blind faith in AI.

What Technologists Can Learn From Humanities

On the one hand, technology enthusiasts are quick to conclude that generative AI means replacing human work. If AI can create art, they reason, human artists aren't necessary anymore. Twitter users and headlines proclaimed the "death of the college essay" and the "death of artistry." Some business owners say they've already started cutting jobs, using ChatGPT in place of human workers.

The underlying assumption of these sweeping claims is that creativity and writing aren't worth our time — in other words, that the output of humanities fields can be outsourced to computers. Knowing how generative AI tools work quickly reframes this perspective. AI writing tools simply produce the most common or likely text based on everything they've seen on the internet, which is hardly a recipe for originality and creativity. Similarly, Dall-E depends on the work of the millions of human artists it's trained on, and it's not equipped to generate new art styles. We can't separate generative art and writing from the human ideas behind them.

A closer look reveals that generative AI works best as a human aide and a tool for lower-level, tedious tasks. ChatGPT's range of expression is limited mainly to texts it has read on the internet and phrases approved by human raters, making it a poor fit if your goal is to communicate novel ideas, craft a personal response, or even write a factually correct paper. ChatGPT is a remixing machine: helpful for repetitive tasks like drafting emails or answering customer FAQs, which humans often already use templates for, but not for tasks requiring higher-order thinking and creativity.
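The "remixing machine" idea can be seen in miniature with a toy bigram model. This is a deliberately simplified sketch, not how ChatGPT is actually built (real models use neural networks over billions of tokens), but it illustrates the core point: the model can only ever emit words it has already seen following the current word in its training text.

```python
import random
from collections import defaultdict

# Toy training text -- everything the "model" will ever know.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Record which words have followed each word in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=6):
    """Remix the corpus: repeatedly pick a word that has
    followed the previous word somewhere in the training text."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # dead end: nothing ever followed this word
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every output is fluent-looking but strictly recombined from the training data; the model will never say "dog" because "dog" isn't in its corpus. Scaled up enormously, that's the intuition behind why these tools excel at templated text and struggle with genuine novelty.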

In the real world, generative AI is starting to be deployed: the Royal Bank of Canada is using it to streamline engineers' workflows, and the Kansas Health System recently released a generative AI app that automatically fills out patient medical charts. Similar examples exist across industries, but one commonality is that human oversight is needed to fact-check AI-generated content.

We aren't removing humans from the equation anytime soon. With this in mind, technologists should ask themselves: how will humans interact with generative AI? Designing and implementing AI tools will require philosophical frameworks (what does it mean for a computer to understand?), an understanding of language to generate effective prompts, and an inspection of the social contexts AI operates in. ChatGPT already has safeguards in place for users who intentionally ask it biased questions, and the risks posed by bad actors will only grow. We need big-picture, humanistic thinking.

What Humanities Can Learn From Technology 

Artists and writers have come to view generative AI as an adversary threatening their jobs and societal relevance. When AI-generated art won a digital art competition in Colorado, artists were outraged, and (I think) rightfully so, as most AI art models were trained on digital artists' work "without the creator's knowledge, compensation or consent." Machines mimicking art, often down to an exact artist's style, without the painstaking hours and creative energy, seem like a recipe for the "death of artistry," as one Twitter user put it.

But this view stems from the same flawed view of AI as above. Dall-E 2 is trained on the work of human artists, and it will quickly become obsolete if it isn't continually updated with new human art styles; otherwise, it couldn't generate art "in the style of Andy Warhol" or "in the style of Claude Monet." In my opinion, the biggest problem is the lack of rightful compensation for the artists whose work is emulated by AI, and artists may have to dive into the world of NFTs and data privacy issues to make their case with technologists.

NFTs, data privacy, and to some extent the metaverse are hot fields in technology with the potential to benefit artists. Yet I'd worry that artists would shy away from active participation in conversations about how art will be monetized in the future because of some perceived lack of technical expertise, much as humanities majors often avoid highly technical classes. In these spaces, cross-collaboration is essential to building relevant, sustainable solutions.

Additionally, AI has potential as a novel creative tool or medium. It can serve as a jumping-off point for brainstorming or be incorporated directly into creative workflows. After creating the well-known Dall-E-generated Cosmopolitan cover, its creator tweeted, "the more I use #dalle2, the less I see this as a replacement for humans, and the more I see it as tool for humans to use – an instrument to play." If artists could approach AI art with curiosity rather than anger (even rightful anger), they would open countless opportunities for new creative work.
