Learning and Curation Through Podcasts—and What Happens When You Give Agents a Mouse
Computer use, Podcasts and new ways of learning
Hey Everyone!
There seems to be a near-universal belief in AI circles that agents are the next big thing. Of course, no one fully agrees on what exactly an agent is, but it generally involves an AI acting independently to accomplish user goals.
Learn, Just Like LLMs
The digital world is a reflection of the real world, and if I reverse-map how we learn onto how LLMs learn, it’s not too different. When we don’t know much about a topic, we start by gaining knowledge, consuming information through conversations, interactions, listening, or reading, to build a basic model that acts as a starting point and a springboard. Without this basic model in place, there’s nothing to improve upon. Next, we test the model in the real world and observe the outputs and reactions: through discussions with friends and colleagues, or by creating and sharing content about it (text, video, audio). Based on this, we calibrate, increasing certain “weights” and muting others, all while focusing on a specific area. Over time, we can expand to include more diverse perspectives, recalibrating with each round of input and feedback.
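To make the analogy concrete, here is a toy version of that loop in Python. The topics, feedback scores, and learning rate are all invented for illustration; this sketches the shape of the feedback cycle, not a real training algorithm.

```python
# Toy sketch of the learn -> test -> recalibrate loop described above.
# All values are made up for illustration.
LEARNING_RATE = 0.1

# Step 1: build a basic model from initial sources (the "pretraining" phase).
confidence = {"agents": 0.3, "podcasts": 0.3, "computer use": 0.3}

def get_feedback(topic: str) -> float:
    """Stand-in for real-world feedback: discussions, shared content, reactions."""
    return {"agents": 0.8, "podcasts": 0.6, "computer use": 0.4}[topic]

# Steps 2-3: test the model against the world, then recalibrate the "weights".
for _ in range(3):
    for topic, current in confidence.items():
        feedback = get_feedback(topic)
        # Nudge each weight toward the feedback signal, a bit like a gradient step.
        confidence[topic] = current + LEARNING_RATE * (feedback - current)

print(confidence)  # the weights drift toward what the feedback supports
```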
One of the most important steps in this process is self-awareness: understanding your strengths and the areas that need improvement. Equally essential are reliable sources to build the foundational model, a live system to test responses against, a way to recalibrate the weights, and exposure to broader focus areas. Together, these elements help the human “learning LLM” become as robust as possible.
Each part of this system contributes to growth, and missing any one of them can restrict learning, leading instead to the complacency and hubris of a falsely mastered skill.
Blessed are those who have this system in place and can utilize it; they become unbeatable learning machines!
NotebookLM - Netflix shareholder’s letter
Podcast content discovery has always been tricky, especially since audio as a learning format comes with time constraints—the longer the podcast, the higher the drop-off. We’re naturally inclined toward text-based content, and I’m constantly juggling more content than I can realistically consume. That’s why the blend of “on-the-go consumption” with podcasts is a lifesaver.
NotebookLM is essentially the product of this blend, giving us the best of both worlds: listening to engaging conversations around content we’re genuinely interested in. I love going for walks with a podcast, and I’ve always needed a way to get bite-sized insights from things like quarterly reports and shareholder newsletters. NotebookLM has been a relief. I knew that if this information could be fed to an LLM, it could summarize it, but typical summaries are dry and too straightforward. NotebookLM, however, curates it in a way that’s pretty darn engaging, which makes a huge difference.
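NotebookLM’s actual recipe isn’t public, but the core idea (document in, two-host dialogue out) is easy to sketch. Everything below, from the local file name to the prompt, is my own illustrative guess, not NotebookLM’s pipeline; I’m using Anthropic’s Python SDK simply because it comes up later in this issue.

```python
# Hedged sketch: turn a shareholder letter into a podcast-style script.
# The file name and prompt are hypothetical; this is not NotebookLM's method.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("netflix_shareholder_letter.txt") as f:  # hypothetical local copy
    letter = f.read()

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": (
            "Turn this shareholder letter into a lively two-host podcast "
            "script: the hosts trade questions and reactions, highlight the "
            "key numbers, and keep it to about 15 minutes of speaking time.\n\n"
            + letter
        ),
    }],
)
print(response.content[0].text)
```

The difference between this and a plain “summarize this” prompt is exactly the dry-versus-engaging gap described above: same information, very different listening experience.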
At the risk of sounding too enthusiastic too soon, I genuinely feel this was a missing piece in the content I consume. The output feels truly professional. This might be my first directed podcast: I just had to choose the content to focus on, and boom, I was ready for a 15-minute walk with the right content to go along.
Try it for yourself with this podcast: Netflix shareholder’s letter
Claude’s computer use
I tried Claude’s computer use, and it did feel like something from the future: my prompts led it to perform multi-turn tasks with little to (almost) no help. From where I stand, this looks like a leap forward and a precursor to the age of agents. Given a simple instruction, it can carry out tasks that require dozens, and sometimes even hundreds, of steps to complete.
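For anyone curious what kicking off such a task looks like, here is a minimal sketch against Anthropic’s computer-use beta as documented at launch. The tool type, beta flag, and model name are the October 2024 identifiers and may have changed since.

```python
# Minimal sketch: start a computer-use task via Anthropic's beta API.
import anthropic

client = anthropic.Anthropic()

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",   # virtual screen/mouse/keyboard tool
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
    }],
    messages=[{"role": "user",
               "content": "Open the quarterly report and pull out the revenue table."}],
    betas=["computer-use-2024-10-22"],
)

# The reply contains tool_use blocks (screenshot, click, type, ...) that your
# own harness must execute on a display, feeding results back in a loop.
for block in response.content:
    print(block.type, getattr(block, "input", None))
```

Notably, the model never touches the machine directly: it only proposes actions, and your harness decides how (and whether) to execute each one.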
Reasoning plays a crucial role in matching up to humans, and that is where computer use will come in very handy. I can foresee that, as models develop stronger reasoning layers, an agent will be able to perform whatever task it is handed with logic, precision, and goal-directedness, irrespective of the task’s length.
Ethan Mollick, an associate professor at the Wharton School of the University of Pennsylvania, got a chance to try Anthropic’s agent early. He had it whip up a lesson plan for him while he did other things:
As one example, I asked the AI to put together a lesson plan on the Great Gatsby for high school students, breaking it into readable chunks and then creating assignments and connections tied to the Common Core learning standard. I also asked it to put this all into a single spreadsheet for me. With a chatbot, I would have needed to direct the AI through each step, using it as a co-intelligence to develop a plan together. This was different. Once given the instructions, the AI went through the steps itself: it downloaded the book, it looked up lesson plans on the web, it opened a spreadsheet application and filled out an initial lesson plan, then it looked up Common Core standards, added revisions to the spreadsheet, and so on for multiple steps. The results are not bad (I checked and did not see obvious errors, but there may be some — more on reliability later in the post). Most importantly, I was presented finished drafts to comment on, not a process to manage. I simply delegated a complex task and walked away from my computer, checking back later to see what it did (the system is quite slow).
This example points to the potential of agents to handle intricate tasks, relieving users of detailed oversight and enabling us to focus on broader, more strategic work.
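To make “the AI went through the steps itself” concrete, here is a bare-bones sketch of the loop a harness runs during a session like the one Mollick describes. The execute_action helper is a hypothetical stub; a real harness would click, type, and return screenshots.

```python
# Bare-bones agent loop for a computer-use session (sketch, not production code).
import anthropic

client = anthropic.Anthropic()
tools = [{"type": "computer_20241022", "name": "computer",
          "display_width_px": 1024, "display_height_px": 768}]
messages = [{"role": "user",
             "content": "Draft a Great Gatsby lesson plan in a spreadsheet."}]

def execute_action(action: dict) -> str:
    # Hypothetical stub: a real harness would drive a (virtual) display and
    # return a screenshot; here we just acknowledge the requested action.
    return f"executed {action.get('action', 'unknown')}"

while True:
    response = client.beta.messages.create(
        model="claude-3-5-sonnet-20241022", max_tokens=1024,
        tools=tools, messages=messages, betas=["computer-use-2024-10-22"],
    )
    messages.append({"role": "assistant", "content": response.content})
    tool_uses = [b for b in response.content if b.type == "tool_use"]
    if not tool_uses:          # no further actions requested: task finished
        break
    messages.append({"role": "user", "content": [
        {"type": "tool_result", "tool_use_id": b.id,
         "content": execute_action(b.input)}
        for b in tool_uses
    ]})
```

Dozens or hundreds of steps are just this loop repeating, which is also why these sessions feel slow today.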
Thanks for tuning in this week! I’ll be back with more next time—until then, take care and happy learning.
Best,
Niket