
Artificial Intelligence (AI) might sound like something from a futuristic sci-fi movie, perhaps with an army of robots plotting to take over the world. But the reality is far more complex, exciting, and yes, sometimes terrifying. From its humble beginnings to its current status as one of the most powerful forces in technology, AI has been on quite a ride. 

So, let’s take a trip down memory lane to explore the fascinating (and occasionally rocky) journey of AI. We’ll start with the ancient days of the 1950s when the term “AI” was just a gleam in the eye of a few ambitious scientists, and we’ll end with today’s jaw-dropping advancements in generative AI. 

Grab a coffee; this is going to be a fun (and slightly sarcastic) ride. 

From Turing’s vision to AI’s birth in the 1950s 

Let’s go all the way back to 1950, when Alan Turing, the British mathematician and computer science pioneer, published his famous paper “Computing Machinery and Intelligence.” In this paper, he asked the rather ambitious question, “Can machines think?” Turing proposed what would later be known as the ‘Turing Test,’ a specific evaluation where a human judge engages in natural language conversations with both a human and a machine, without knowing which is which. If the judge cannot reliably distinguish between them, the machine passes the test (Turing, 1950). 

A few years later, in 1956, a bunch of brainy folks (including John McCarthy) decided it was time to make AI a formal thing. They dubbed it “Artificial Intelligence” at the Dartmouth Conference, which, let’s face it, sounds like the sort of place where you would go if you were really into robots and thinking deeply about what constitutes human cognition. McCarthy’s definition? AI is “the science of making machines do things that would require intelligence if done by humans” (McCarthy et al., 1955). Simple in theory. In practice, not so much. 

The first AI winter: A reality check in the 1970s 

Fast forward to the 1970s, and the AI enthusiasts were starting to struggle. Turns out, creating a machine that can actually think (or at least pretend to) was a lot harder than just talking about it. Researchers encountered significant limitations in early AI systems, which struggled with common-sense reasoning, understanding context, and generalising beyond narrowly defined tasks. 

The first ‘AI Winter’ began in the mid-1970s, triggered in part by the influential Lighthill Report in the UK, which criticised AI’s failure to deliver on its grandiose promises (Lighthill, 1973). Around the same time, DARPA cut funding for exploratory AI research. With expectations unmet and the computing hardware of the era hitting its limits, funding dried up and progress waned. 

The rise of machine learning and the breakthrough of deep learning 

But wait, it wasn’t all doom and gloom. By the 1980s and 1990s, AI research shifted focus to machine learning approaches. Rather than manually programming machines with explicit rules, researchers developed algorithms that could learn patterns from data. Early machine learning methods like decision trees, support vector machines, and neural networks laid the groundwork for later developments. Think of it as the AI equivalent of letting a toddler figure things out by making mistakes (but with far fewer tantrums and far more algorithms). 
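To make that shift concrete, here is a minimal, purely illustrative sketch of the idea (the data points and threshold rule are invented for this example): instead of a programmer hard-coding a rule, a tiny “learner” picks the decision threshold that best fits a handful of labelled examples.

```python
# A toy "machine learning" example: instead of hand-coding a rule,
# the program finds the best decision threshold from labelled data.
# The data points below are invented purely for illustration.

examples = [(1.2, 0), (2.3, 0), (3.1, 0), (4.8, 1), (5.5, 1), (6.0, 1)]

def accuracy(threshold, data):
    """Fraction of examples classified correctly by 'value >= threshold -> 1'."""
    correct = sum(1 for value, label in data if (value >= threshold) == bool(label))
    return correct / len(data)

# "Training": try candidate thresholds and keep the one that fits the data best.
candidates = [value for value, _ in examples]
best = max(candidates, key=lambda t: accuracy(t, examples))

print(f"learned threshold: {best}")               # 4.8
print(f"prediction for 5.1: {int(5.1 >= best)}")  # classified as 1
```

It is deliberately trivial, but the principle is the same one that scales up to decision trees, support vector machines, and neural networks: the rule comes from the data, not from the programmer.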

Then came deep learning, which sounds like the name of an obscure indie band but is actually a subfield of machine learning that uses neural networks, layer upon layer of interconnected “neurons” that process information like a (very fast) brain. In 2012, a deep learning model called AlexNet outperformed traditional computer vision models at the ImageNet competition, which was basically like the Oscars of image recognition (Krizhevsky et al., 2012). This win was the equivalent of AI finally being handed the key to the city and told, “Alright, go ahead, do something cool with this.” 
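As a rough illustration of what “layer upon layer” means, here is a minimal sketch of a two-layer network’s forward pass, assuming NumPy is available. The weights are random, so it hasn’t learned anything; it just shows data flowing through stacked layers of “neurons”.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "deep" network: two stacked layers of weights with a non-linearity
# in between. Real models like AlexNet stack many more layers, but the idea
# of passing data through successive layers of neurons is the same.
W1 = rng.normal(size=(4, 8))   # layer 1: 4 inputs -> 8 hidden neurons
W2 = rng.normal(size=(8, 3))   # layer 2: 8 hidden neurons -> 3 outputs

def forward(x):
    hidden = np.maximum(0, x @ W1)   # ReLU activation: keep positive signals
    return hidden @ W2               # output layer (e.g. scores for 3 classes)

x = rng.normal(size=(1, 4))          # one example with 4 input features
print(forward(x).shape)              # (1, 3)
```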

Fast-forward a bit, and deep learning started powering everything from self-driving cars to smart assistants and even AI-generated art. But we’re getting ahead of ourselves… 

AI hits the headlines 

Alongside research breakthroughs, AI started winning public attention with some headline-grabbing victories. In 1997, IBM’s Deep Blue defeated world chess champion Garry Kasparov, proving AI could master strategic games (Campbell et al., 2002). Then, in 2016, DeepMind’s AlphaGo beat Go grandmaster Lee Sedol at a game so complex it was long thought beyond the reach of machines (Silver et al., 2016). These wins weren’t just PR stunts; they marked the moment the world started to take AI seriously. 

The transformer architecture and the emergence of Generative AI 

Now, it’s time to talk about the transformer architecture, which is the backbone of some of the most cutting-edge natural language processing models we use today. In 2017, the team at Google introduced a game-changing innovation that allowed AI to process entire sentences or even paragraphs of text all at once, rather than word-by-word, and establish the significance of connections between words far apart in the text (Vaswani et al., 2017). For the first time, machines could truly “understand” context and nuance. Not perfectly, but certainly better than before. 
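To give a flavour of the mechanism, here is a heavily simplified sketch of the scaled dot-product attention at the heart of the transformer. Random vectors stand in for real word embeddings, and the learned query/key/value projections are omitted, so treat it as an illustration of the idea rather than the full architecture described by Vaswani et al.

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: every position attends to every other position."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # how relevant is each word to each other word
    weights = softmax(scores)            # rows sum to 1: an "attention" distribution
    return weights @ V                   # each output mixes information from all words

rng = np.random.default_rng(0)
seq_len, d_model = 5, 16                 # a 5-word "sentence", 16-dim embeddings
X = rng.normal(size=(seq_len, d_model))  # stand-in word embeddings

# In a real transformer, Q, K and V come from learned projections of X;
# here we reuse X directly to keep the sketch short.
out = attention(X, X, X)
print(out.shape)                         # (5, 16): one context-aware vector per word
```

The key point is that the whole sentence is processed at once, and words far apart can still exchange information through those attention weights.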

This leap paved the way for the rise of generative AI, which sounds like something out of a sci-fi novel. Imagine a machine that doesn’t just regurgitate information, but creates entirely new content, whether it’s writing an essay, generating code, or even crafting a song. These AI systems were trained on massive datasets, learning patterns, structures, and language nuances. The result? GPT (Generative Pre-trained Transformer) models, which can create human-like text based on simple prompts. 

The generative AI revolution: When machines got seriously creative 

So, what happened after 2017? Buckle up, this is where things get wild. Those transformer systems didn’t just sit around; they fuelled a new generation of AI that could create content. 

In 2018, OpenAI released GPT-1, a modest but promising text generator. That same year, Google introduced BERT, an AI that could better understand entire sentences, making search engines and chatbots a lot more coherent. 

By 2019, GPT-2 was so good at writing human-like text that OpenAI initially kept it under wraps, worried about its potential misuse. And then came 2020’s GPT-3, an AI that had effectively read the internet and could write essays, stories, and code with scary fluency. 

Between 2021 and 2022, things got even more surreal. Models like DALL·E and Midjourney began turning phrases like “a cat riding a dinosaur through space” into astonishingly good images. Artists, marketers, and meme lords took notice. 

Then in late 2022, ChatGPT put this power in everyone’s hands. Suddenly, your aunt was asking AI to write emails, recipes, or even poems for the dog’s birthday. 

By 2023–2024, AI could understand both text and images, create videos from descriptions, and handle tasks previously reserved for sci-fi. The conversation shifted from “Can AI do this?” to “Should AI do this?” 

Welcome to the age where the lines between human and machine creativity are blurring. Exciting? Absolutely. Scary? Maybe a little. Game-changing? No question. 

Understanding large language models: Knowledge, not intelligence 

Let’s talk about the elephant in the room: large language models (LLMs). These are AI systems like GPT-3 and GPT-4, capable of churning out text that sounds eerily human. They don’t think like humans, but they can write like them, which can be a bit unsettling if you think too hard about it. 

Despite all their impressive outputs, LLMs don’t actually “understand” the content they generate. They’re not sitting there contemplating the meaning of life or getting into deep philosophical debates (yet). They are statistical systems trained on vast text corpora using self-supervised learning techniques. While they can generate remarkably coherent and contextually appropriate text, they operate by predicting the most likely next tokens (words or fragments of words) in a sequence based on patterns in their training data, rather than through reasoning or understanding in the human sense (Brown et al., 2020). 
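To make “predicting the next token” concrete, here is a toy sketch of that generation loop. The vocabulary and probabilities are invented for illustration; a real model conditions on the whole context and scores tens of thousands of tokens with a learned neural network rather than a lookup table.

```python
import random

# A toy stand-in for an LLM: given the last word, it "predicts" a probability
# distribution over possible next words, then samples one and repeats.
NEXT_WORD_PROBS = {
    "the":   {"cat": 0.5, "dog": 0.4, "idea": 0.1},
    "cat":   {"sat": 0.6, "slept": 0.3, "spoke": 0.1},
    "sat":   {"quietly": 0.7, "down": 0.3},
    "dog":   {"barked": 0.8, "slept": 0.2},
    "slept": {"peacefully": 1.0},
}

def generate(prompt, max_tokens=4):
    words = prompt.split()
    for _ in range(max_tokens):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        choices, probs = zip(*options.items())
        words.append(random.choices(choices, weights=probs)[0])
    return " ".join(words)

print(generate("the"))   # e.g. "the cat sat quietly" -- plausible, not "understood"
```

The output can look fluent, but nothing in the loop knows what a cat is; it only knows which words tend to follow which.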

As Sundar Pichai, the CEO of Google, put it, AI is a tool that will be “even more profound than [fire or electricity or the internet]” (Pichai, 2023). And while that’s a bold claim, it does hold some truth. If the future of AI is anything like its past, we can expect it to continue to transform our world in ways we can barely begin to understand. 

The challenges of bias and ethics in AI 

But here’s the thing: with great power comes great responsibility (going full Spider-Man on you). As AI becomes more integrated into our lives, addressing bias, fairness, and ethics becomes crucial. AI systems are only as good as the data they’re trained on, and if that data is biased (which it often is), the AI can produce biased results. 

As Stephen Hawking wisely warned, creating AI could be “the biggest event in the history of our civilization. Or the worst” (Hawking, 2014). Let’s hope for the former, but it’s a bit like driving a car without knowing exactly how the brakes work. We all need to keep our eyes on the road, and on AI ethics, if we want to avoid a major crash. 

Looking ahead: The future of AI 

So, what does the future hold for AI? AI will continue to evolve at a breakneck pace, and we’ll likely see it permeate even more aspects of our lives, from healthcare and finance to entertainment and beyond. Kai-Fu Lee, one of the leading voices in AI, put it succinctly when he said, “I believe AI is going to change the world more than anything in the history of humanity. More than electricity” (Lee, 2018). That’s a bold claim, and it will be interesting to look back and see whether it holds true. For that to happen, we’ll need progress beyond transformers and beyond GPT-style models; we need more breakthroughs, and who knows how long those will take. 

The AI Revolution is just beginning 

From Turing’s early musings to the era of generative AI, AI has come a long way and it’s only just getting started. We’re living in the age of AI, where the lines between what’s possible and what’s impossible are being blurred at an alarming rate. Sure, there are challenges ahead, but there are also countless opportunities to leverage AI in ways that can drive innovation, efficiency, and, dare we say it, progress. 

At BrightSG, we’re embracing the AI revolution, and we’re committed to helping businesses unlock its full potential. We might not have flying cars just yet, but we’re moving in the right direction.