When it came time to choose a subject for university, I wanted to study what interested me. In my late teens, I desperately wanted to become a professional screenwriter, and it was that desire that opened up my path into studying film. However, I had also been “into computers” since my dad bought me a Sinclair ZX81, so I added a minor in a subject called Humanities Information Technology.
Humanities IT looked at the theoretical, social, and cultural impact of computing. It was practical too: writing code and essays about neural networks, and producing Cubist artefacts with Aldus PageMaker.
It was interesting to see what that year’s cohort did after they graduated. For the Film graduates, it was often a case of trying to get into the industry, perhaps as a runner, prior to giving up and starting a reliable but less personally interesting career elsewhere.
I got to know some of the University IT staff in my first year, and by the third I was kindly invited to work part-time on the help desk. A combination of this and my own work on the World Wide Web (which was only one year into actually having graphics) led to my being transferred into the University’s web team, and eventually to a full-time role after graduation. This was exciting and enriching for a multitude of reasons, including the simple logistical one of not having to bother looking for a job elsewhere. I could build my tech skills in making web applications with ColdFusion, while wrangling with Netscape Enterprise Server on a Solaris box.
The upshot is that this move led me to pursue a career in what one might now call “digital”. Bringing humanities-led topics into that practice hasn’t always been straightforward, but it has worked on a few occasions. In a job several years ago, I brought someone into the digital team because she had an art history background, and so could understand how people visually interpret and process what they see. She went on to be a web marketing superstar.
Perhaps the last time that both technology and the humanities really fused was in the growth of social media. For companies to really get their offering right, they needed to blend technologically advanced architectures, code, and UI with the anthropological study of how people behave in given settings, and under given personal contexts and stresses. Facebook’s stumbling growth was perhaps down to an imbalance; it was tech-first under a mission to “connect the world”, without a coherent explanation as to why. This led to Facebook’s famous “move fast and break things” culture: just because you can let an uncontrolled, uncontrollable product loose on society without examining its possible sociopolitical consequences, it doesn’t mean that you should.
We are now entering the next critical mass of technologists and theorists/sociologists coming together.
AI is behind this critical mass for a variety of reasons.
The first is to do with the data that AI systems need to ingest and process. This is not about volume per se, but about where the data sources come from, what’s in them, the biases that they may carry, and the impact that their ingestion may have. This needs researchers, social science practitioners, and those in the wider legal professions to really get on board. We are already aware that AI may become too powerful for engineers to handle alone, and this represents a massive and urgent opportunity for the humanities.
The second is for humanities-based academics and practitioners to think more deeply about where all this is going. The genie is out of the bottle, and while there has been a call to pause developments while we work all this out, it’s unlikely to happen — particularly if you’re running an AI startup and you need to eat sometime this year. Where there are legitimate concerns regarding the rapid development of AI then, as above, now is the time for all good humanities people to come to the aid of the party.
The third is for people with technical and theoretical experience to see this critical mass period as a huge opportunity for them, and for employers to recognise that. Training AI models and platforms requires people experienced in linguistics, communications theory and practice, translation, anthropomorphism, anthropology, and much more: essentially the full array of humanities topics. Even film theorists like me have opportunities in AI work related to deepfakes, AI-produced video content, and interpretative technologies such as script-to-video and vice versa.
In short, this could be a golden time for the humanities in terms of their relevance and value to technological development and impact. However, it needs research bodies, universities, and perhaps governments to promote their value to the AI community, and the AI community to understand what value they can bring. Some large companies and organisations already “get this”, as they have existing, mature research arms — but not all. And it’s easy to overlook the social and theoretical impact when your team is focussed on getting your LLM out of the door as quickly as possible to earn exposure.
Humanities: it’s your time.