New to AI?
Something a bit different - introductory AI resources.
Spending time with family and friends over summer, one question I constantly got asked was
“I’m new to AI and want to learn more about it. Where do I start?”
This comes from people who are increasingly seeing AI in the news and want to know what it can do for them, whether they should care, and whether they should be worried. Maybe this is you.
Motivated by my inability to give a good answer to this question, I’ve decided to collate some non-technical resources I think are good.
This post is separated into three categories:
Use ChatGPT and see for yourself!
The state of AI
AI safety
Use ChatGPT and see for yourself!
Learning by doing is the best way to get started! There are countless “an introduction to” books and videos out there, e.g. The ChatGPT Handbook for Beginners, but these are not worth the paper they are written on.
Instead, make a ChatGPT account and get playing.
I’ve noticed that the number of available products can sometimes act as a barrier to getting started. Should you use ChatGPT, Gemini, Claude, Grok, Meta AI, Perplexity, etc? Choice overload! The answer is simple: ChatGPT is the best for 95% of things you will care about.
To see the benefits, ask the model about something complex and meaningful to your life. For example, I have a friend, Craig, who is a keen golfer and often organises trips abroad with his friends. With this in mind, I demonstrated ChatGPT’s abilities by asking the model:
“I’m currently planning a golf trip to Adelaide, Australia, for me and 7 of my friends. We want to go sometime around February 2026 for around a week. I want you to plan the entire trip and report back to me. The main thing to think about is the golf club. We want somewhere with good accommodation, nice restaurants for the evening and a couple of other good courses nearby. Do a full search and produce a full day-by-day itinerary, an accompanying fully costed budget, and a booking checklist.”
The results were impressive. GPT-5 searched for over 5 minutes and considered challenges we hadn’t thought about: minibus hire, visitor green fees, and whether we’d need visas to go to Australia.
Think of a question that is equally hard but relevant to your life. Give it a go and you’ll immediately see the power.
The state of AI
Next, I’ve compiled a list of podcasts, blogs, and books I think are good introductions to the field.
Ethan Mollick’s “One Useful Thing”. This blog is a beginner-friendly introduction to AI, with a particular focus on business. The posts are short and easy to read. Look at some of the most popular posts, e.g. Using AI Right Now: A Quick Guide, or 15 Times to use AI, and 5 Not to. It’s nothing revolutionary, but it’s a practical view of the latest capabilities.
Theo Von’s Podcast with Sam Altman. This is weirdly good! Theo Von is an American comedian and not someone I would normally listen to, but I got recommended the episode where he interviews Sam Altman, the OpenAI CEO, and it surprised me. Theo Von asks the questions the everyday person wants to ask, and Altman gives generally decent answers. There are a few areas where the podcast is light – Altman gives fuzzy answers on AI safety questions and unfortunately doesn’t get pushed on them – but overall this is a worthwhile listen!
The OpenAI Podcast. These episodes are a mixed bunch at best. Less insightful content, more OpenAI PR, but nevertheless a view into the capabilities of ChatGPT, where the scientists expect the technology to go next, and how OpenAI leadership think about future society. The catch is that Sam Altman is the master of schmooze, and other senior employees like Mark Chen (Chief Research Officer) have spent significant time at the Altman school of media training. This makes it painful at times. A small bonus here is that it is hosted by someone called Andrew Mayne — are we the only two Maynes in AI?
Google DeepMind: The Podcast. Hannah Fry is quite possibly the coolest person on earth. She’s the Professor of the Public Understanding of Mathematics at Cambridge and has made a career of bringing maths to the general public, with books on topics such as Monopoly and dating. She is therefore the perfect person to host DeepMind’s podcast, distilling complex AI research into jargon-free episodes. Unlike the OpenAI podcast, this is a deep dive into the latest scientific breakthroughs rather than pure PR. Of course, it is ultimately still Google trying to influence public perception, but they’re much better at hiding it. I like this a lot.
The State of AI Report (PowerPoint). Venture capital firm Air Street Capital produces an annual report covering the main developments in AI over the past year. It bridges general capabilities, business, and safety. It’s a long read, but fairly easy to digest without much background knowledge. This is especially relevant if you’re interested in AI for business. There are many copycats out there, e.g. McKinsey’s version, but you can’t beat the original.
The Coming Wave (book). I haven’t personally read this, but have heard good things. It’s a beginner-friendly introduction to AI and what many scientists expect to happen in the coming years. Written in 2023 by (somewhat controversial) DeepMind co-founder Mustafa Suleyman, it outlines the promise and peril of the new technology. His accompanying interview on The Rest is Politics: Leading is a good summary of the main points.
Dwarkesh Podcast (advanced). This is the best podcast on the internet, in my opinion. Dwarkesh Patel is a seriously good interviewer whose USP is doing incredibly thorough research before speaking to his guests. His interviews with top AI players like Ilya Sutskever (then OpenAI Chief Scientist), Dario Amodei (Anthropic CEO), and Shane Legg (DeepMind co-founder) are must-listens. Episodes assume significant background knowledge, but are worth tackling once you’ve covered some of the other resources.
AI safety
As an AI safety researcher, I am legally obliged to also recommend my favourite resources here.
Geoffrey Hinton’s Diary of a CEO episode. This is really good! Hinton is a Nobel Laureate widely credited with developing many of the key deep learning techniques underpinning modern AI. I had the chance to speak to him during his 2023 Romanes Lecture at Oxford, and he (infamously in my lab…) roasted my PhD research direction. (Two years and a change of topic later, I’ve concluded he was quite possibly somewhat right… maybe). Anyway, since leaving Google, he’s become an AI safety evangelist. He’s an excellent speaker and does a great job of explaining complex safety concerns in plain language. I’m not Steven Bartlett’s biggest fan, but this is a fantastic podcast episode that covers the main areas of AI safety and why we should all care about it.
The Alignment Problem (book). An excellent, beginner-friendly book on the fundamental problem of AI safety (often billed as the alignment problem) and why it is so difficult. The book is a bit old now, and doesn’t cover the latest safety concerns and research; however, it is still well worth a read.
AI 2027. The AI safety community ranges from people concerned about serious but non-existential risks, like racial or gender bias, to those worried about human extinction. AI 2027 falls in the latter category. Led by ex-OpenAI researcher Daniel Kokotajlo and legendary blogger Scott Alexander, it outlines their “best guess” about how superhuman AI might develop over the coming years. I personally view this as a very extreme forecast; however, Daniel accurately predicted much of the current paradigm back in 2021, so he’s worth taking seriously. The accompanying Dwarkesh Podcast episode and YouTube explainer are friendly ways to understand the main points.
If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI (book). Moving further towards the extreme, you arrive at Eliezer Yudkowsky, the original AI safety philosopher and activist. Yudkowsky’s latest book outlines why he believes advanced AI will end humanity. I’ve not read the book yet (it comes out this week), but I’ve seen good, mixed, and bad reviews. I have read The Problem, a blog post by the same authors, which similarly introduces AI as an extinction risk. I find the story interesting, but struggle with some of the logic. No doubt Yudkowsky has preempted all these critiques in the book, so I look forward to reading it.
These are all intended for consumption by the general public. There are lots of other resources I recommend for AI researchers, so I might do another post in time.
