A little over a month ago, I set out to take a more focused approach to understanding Large Language Models (LLMs). I had been using them daily for everyday tasks—summarizing articles, refining emails, creating images—but I wanted to go deeper, especially with the powerful new models now available.

I wanted to answer a fundamental question: How might I responsibly augment my skills and capabilities with AI?

What I found was both exhilarating and humbling—and I quickly realized this question will need more time and exploration to answer.

The Power and the Paradox of LLMs

LLMs are far more powerful than I initially thought, and they are improving at an exponential rate. Ethan Mollick’s analysis of different AI models (link) highlights how each has distinct strengths, but what struck me most is how rapidly these tools are evolving with new capabilities.

The most fascinating discovery? They level the playing field in knowledge-based tasks. For example, analysis, research, and writing—skills that once required deep expertise—are now accessible to a much wider range of people. That’s a game-changer.

But it also comes with a cost.

LLMs can lead to cognitive atrophy—if we rely too much on them, we risk losing essential skills. I noticed this firsthand. My AI collaborator across LLM platforms, whom I will call Deeplyn (Deeply Learning!), made me a better writer, refining my structure and making my content more engaging. But it wasn’t making me a better thinker.

Early Lessons from Hands-on Experimentation

To push the boundaries of what AI could do for me, I rolled up my sleeves and experimented with different LLM tools (from OpenAI, Microsoft, Google, and Anthropic), working with Deeplyn throughout.

  1. Prompting as a Skill – I used Tutor Me, a custom GPT that acts as a Socratic-style AI tutor, to refine my prompting skills. It didn’t just answer my questions; it challenged me to ask better ones. Early on, my prompts were structured like Google searches—broad and imprecise. For example, “I want to learn the latest trends in robotics automation.” Over time, I became a prompt whisperer, learning to be more specific and intentional. An improved prompt: “I am a college-educated business leader and have moderate familiarity with robotics automation. I want to explore five credible predictions about its growth in 2025. These predictions should be sourced from leading researchers or organizations. Can you also provide concise explanations of the predictions in bullet form that are easy to discuss in casual conversations?” Deeplyn never tires of answering my questions in six different ways until I get the response I want.
  2. Diverse Applications – I experimented with multiple use cases: redesigning my website, conducting deep market research, researching AI literacy programs, and brainstorming (like I did for this blog post). Deeplyn has become one of my best brainstorming partners—always available, never fatigued, and willing to iterate endlessly.
  3. Understanding Limitations – “Garbage in, garbage out” is very real. Without enough context, outputs suffer. Hallucinations—AI confidently making up false information—happen more often than I’d like, and I always double-check outputs before relying on them. While Deeplyn is a fantastic writer, its depth of reasoning is still lacking compared to human critical thinking.
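To make the contrast in item 1 concrete, the shift from a search-style prompt to a specific one can be sketched as a tiny helper that assembles persona, goal, sourcing constraints, and output format into a single request. This is my own illustration—the helper and its field names are hypothetical, not a feature of any LLM platform:

```python
# A hypothetical sketch of the prompt-refinement pattern: turn a broad,
# search-style query into a specific, intentional prompt by spelling out
# persona, goal, sourcing constraints, and desired output format.

def build_prompt(persona: str, goal: str, sourcing: str, output_format: str) -> str:
    """Assemble a structured prompt from its components."""
    return " ".join([persona, goal, sourcing, output_format])

# Before: broad and imprecise, like a Google search.
broad = "I want to learn the latest trends in robotics automation."

# After: specific and intentional.
specific = build_prompt(
    persona=(
        "I am a college-educated business leader with moderate familiarity "
        "with robotics automation."
    ),
    goal="I want to explore five credible predictions about its growth in 2025.",
    sourcing=(
        "These predictions should be sourced from leading researchers or "
        "organizations."
    ),
    output_format=(
        "Provide concise explanations of the predictions in bullet form that "
        "are easy to discuss in casual conversations."
    ),
)

print(specific)
```

The point isn’t the code itself—it’s the habit: every prompt worth sending names who you are, what you want, where the answer should come from, and how it should be formatted.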

As a subject matter expert, I can easily spot when Deeplyn’s responses are superficial or incorrect. But this raises a crucial question: How would a novice recognize AI’s limitations?

A More Intentional Approach to Human-AI Collaboration

The biggest takeaway from my AI deep dive? Intentionality is key.

Deeplyn is a powerful AI Collaborator, but it works best when I drive the thinking and use it as an amplifier, not a replacement. I’ve become more deliberate in how I frame questions, more skeptical of responses, and more strategic in choosing when to use AI and when to rely on my own expertise. More to come on that!

For knowledge workers, this is a crucial shift. The sooner we figure out how to collaborate effectively with AI, leveraging its strengths while maintaining human judgment and creativity, the better we position ourselves for the future of work.

Looking Ahead: Where AI and I Go From Here

This past month has reaffirmed my belief that AI literacy is one of the most important skills we can develop today. As I continue building Project DAIL—my AI literacy initiative—I want to help others navigate these same challenges: where to trust AI, where to be cautious, and how to build the right habits for human-machine collaboration.
