Stop Playing Telephone with Your AI: A Structured Approach to Conversational Programming

Have you ever played telephone? A message passes from person to person until it reaches the last player, who compares what they heard to the original. The results are often hilarious, but in a company or organization where coworkers relay messages this way, the results could be costly and disastrous.

When you do conversational programming or vibecoding with an AI agent that writes your code, you’re playing telephone. This is especially difficult if you lack a programming background, knowledge of language frameworks, or coding principles. Even experienced programmers who vibecode often end up with programs they can’t maintain or understand.

However, I believe the programmers who have bad experiences with vibecoding are often the same ones who don’t use best practices like test-driven development, Agile, Extreme Programming, or DevOps. Organizations struggling with AI adoption are often the same ones struggling with Agile, Scrum, and Lean practices. It comes down to the telephone game: no contracts, no rules, no real structure for communicating safely.

Applying Engineering Discipline to Conversational Programming

In my experience with conversational programming (I prefer that term over vibecoding, since that is what you’re doing: having a conversation with an AI), you must apply engineering discipline when having AI write code. Here are some tips I find useful.

Start with a Well-Crafted Prompt

When developing an initial prompt, have a decent LLM write it. I first conceptualize what I want done, but understanding the terminology and concepts is important. I recommend Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond by Gene Kim and Steve Yegge. They studied companies that successfully brought vibecoding into their enterprises and found that they succeed because they use structured engineering approaches, applying DevOps principles (descendants of Extreme Programming) such as test-driven development, CI/CD pipelines, and testing tools.

Write a prompt, have an LLM agent refactor it, then record and archive it for future use. This creates a solid one-shot prompt.
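As an illustration, a meta-prompt for that refactoring step might look something like this (the wording and the reading-list example are my own sketch, not a template from Kim and Yegge’s book):

```text
Refactor the rough prompt below into a clear one-shot prompt for an AI
coding agent. Include: the goal, the tech stack, the constraints, the
testing requirements (test-driven development), and the definition of
done. Ask me clarifying questions before producing the final version.

Rough prompt: "Build me a small web app that tracks my reading list."
```

The clarifying-questions line is the important part: it turns a one-way message into a conversation before any code gets written.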

Use AI Coding Agents, Not Chat Interfaces

Don’t use ChatGPT or another chat-oriented interface, copying code back and forth between a chat window and your IDE. Use AI coding agents like Windsurf, Cursor, Claude Code, or Cline. I personally use Claude Code with a subscription plan because I burn through many tokens, and a flat subscription is more forgiving than paying per token through an API.

Learn and Apply Test-Driven Development

Learn test-driven development concepts and include them in your prompts. TDD’s key tenet: write tests first. Know how programs or functions should behave and write tests around that.

TDD forces you to write programs in modular, testable ways. When your AI writes code, it runs the tests and rewrites until they pass. Without TDD, for instance, my Ionic app became a spaghetti mess: fixing one part broke another because dependencies and regressions weren’t tested. The blast radius of each fix kept growing, spilling into thousands of lines of code.
In my GitHub repo, I have a few applications developed using TDD. I used AI coding agents to write the tests and then test the code against them.

Applying TDD to AI projects made code manageable and adding features easier. Modified modules had to pass tests, so the AI knew what broke and fixed it.
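The test-first loop is small enough to show in a few lines. This is a minimal sketch, not code from my repos; `slug()` is a hypothetical helper invented for the example:

```python
# TDD in miniature: the test is written first and pins down the behavior
# we expect, so an AI agent (or a human) knows exactly what "done" means.
# slug() is a hypothetical helper, not from any real project.

def test_slug():
    assert slug("Hello, World!") == "hello-world"
    assert slug("  Already-Clean  ") == "already-clean"

# Only after the test exists do we write the implementation.
def slug(text: str) -> str:
    """Lowercase, keep alphanumerics, join words with hyphens."""
    words = "".join(c if c.isalnum() else " " for c in text.lower()).split()
    return "-".join(words)

test_slug()  # both assertions hold
```

When a coding agent modifies `slug()` later, the test tells it immediately whether the change broke the contract.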

Use Configuration Files to Guide Your Coding Agent

Use various MD files to guide your agent. With Claude, for instance, there’s a CLAUDE.md file that tunes agent behavior and an AGENT.md file with application instructions. Write separate MD files for architecture, coding, user interfaces, and so forth.
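As an illustration, a minimal CLAUDE.md might look like this (the contents are my own sketch, not an official template):

```markdown
# CLAUDE.md (illustrative sketch)

## Workflow
- Write or update tests before changing code (TDD).
- Run the full test suite before declaring a task done.

## Architecture
- See ARCHITECTURE.md for module boundaries; do not cross them.

## Style
- Follow the repo's linter config; no new dependencies without asking.
```

The point is that the rules live in the repository, so every session starts from the same agreements instead of a fresh game of telephone.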

Leverage MCP (Model Context Protocol) Servers

MCP (Model Context Protocol) servers make AI coding agents more efficient. I spun up a Penpot server (a web-based graphic design tool), created an MCP server connecting to it, and had Claude Code connect to that server. Using descriptive statements and image captures, Claude designed a website with my preferred color scheme and look, right in front of me.
There is a YouTube video showing a napkin sketch being turned into a web design.

MCP servers can talk to your web browser to help debug websites. Since I’m not a great graphic designer but know what I like, I describe basics, refine descriptions using an LLM, combine this with napkin sketches, and create prototypes I like.

Practical Application: Flutter Development

This approach works for difficult tasks like Flutter development. Flutter is a useful cross-platform framework but a pain to develop in: every widget must be described in Dart, the language Flutter is built on. Using Figma or Penpot designs as references, an AI coding agent creates widgets that work properly, opening the door to cross-platform Android and iOS app development.

You Still Need to Understand the Fundamentals

You still must test applications because AI agents don’t necessarily make correct assumptions about your system or server. You must verify their assumptions match reality.

You still need to know how to code and set up Docker instances. You can ask AI for assistance, but there’s much AI won’t do for you—and that’s OK. It handles heavy lifting and helps with cognitive load.

Working Within Constraints

For those saying AI can’t do everything or write code right out of the box when given difficult problems: you wouldn’t expect that of a junior engineer either. Work with constraints. As Eli Goldratt explains through the Theory of Constraints in The Goal, you leverage limitations instead of fighting them.

LLMs struggle with giant monolithic codebases. However, decomposing problems into smaller, modular chunks allows AI to write complex applications.

Let AI do its thing. AI handles smaller details well, though you must still test the application.

Conclusion: Stop Playing Telephone

You need good communication. As in any relationship, make everything clear so you know where you stand with the other person. Establish agreements: how you’ll communicate, what the norms are, how you’ll interact with others in your group. Then honor those agreements. The same applies when working with AI.

Rethink how you approach tool limitations and learn to work around constraints. Context windows, resources, and LLM abilities may someday match senior-level programmers. Meanwhile, learn to work within constraints and make your communication clearer and more concise.

Stop playing telephone with your AI; start learning how to communicate better with it and give it some guardrails.



Asking the Right Questions – Building AI Tools one question at a time


In the movie I, Robot, Will Smith’s character, Detective Spooner, talks to Dr. Lanning’s pre-recorded holographic message. It says, “I’m sorry. My responses are limited. You must ask the right questions,” and later, when Spooner asks about revolution, Lanning says, “That, Detective, is the right question.”

Using AI tools with the right question can be revolutionary for you and those who you serve.

With today’s tools, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and others, you can tell the AI what you want and it will give you an answer. It may or may not be what you want. When you’re not sure of the details, which is most of us, I found the most useful thing was to ask the tool the right question. It’s like rubber duck debugging, where you explain your problem to a rubber duck, except in this case the duck answers back.

The Problem: Troubleshooting System Logs

For example, as an IT professional, I’m often tasked with troubleshooting an issue on computer systems. Some application breaks, memory leak, newly discovered bug, network connection issues… who knows?

One thing I found, as many of us know, is that logs often hold clues about what went wrong. Some of the questions I asked myself were: “What if I could ask the logs what is wrong? Can someone other than me interact with a log? How would I do it?” Those are the right questions. My search began with how I could use AI to develop a tool that reads a log and troubleshoots the system with it.

I asked ChatGPT how I could write an agent that reads one or more logs, develops a series of hypotheses about what may be wrong or indicated by the log, and then comes up with possible solutions based on those hypotheses.

A hypothesis requires testing, and ideally a well-formed hypothesis rests on background knowledge and understanding of the problem. The AI Log Analyzer, which is linked from my GitHub repository, began with the question: “Can I create a tool using AI that analyzes my logs and comes up with a best-guess hypothesis for the root cause of a system problem?”
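To make the idea concrete, here is a minimal sketch of what a hypothesis-generating prompt builder might look like. This is not the actual AI Log Analyzer code; `build_prompt()` is a hypothetical helper, and the LLM call itself is omitted since it depends on your provider:

```python
# Sketch: turn a log excerpt into a hypothesis-generation prompt.
# build_prompt() is illustrative, not the AI Log Analyzer's real code.

def build_prompt(log_excerpt: str, max_hypotheses: int = 3) -> str:
    return (
        "You are a systems troubleshooter. Read the log excerpt below.\n"
        f"Propose up to {max_hypotheses} ranked hypotheses for the root cause,\n"
        "and for each one, a concrete way to test it.\n\n"
        f"--- LOG ---\n{log_excerpt}\n--- END LOG ---"
    )

prompt = build_prompt("kernel: Out of memory: Killed process 1234 (java)")
```

Asking for testable hypotheses, rather than a single answer, keeps the tool honest: each guess comes with a way to confirm or reject it.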

Overcoming Limitations: The Context Window Challenge

Initially I thought about just analyzing a whole log in one pass, but as I developed and tested the code, I found that AI has certain limitations. Logs can be massive, often too much information for an LLM to handle. So I asked more questions: “How do I make processing a large log more manageable?” and “How do I deal with the context window limitation?”

A context window is the working memory an AI has for answering your question and carrying out its task. With the AI’s help, I came up with two different approaches.

The first approach: find models with larger context windows. Frontier models like Claude and Gemini have very large ones. Claude, for instance, has a 200,000-token context window (a token is approximately 4 characters), roughly the length of a novel; for comparison, there is an article that shows relative sizes by token count. I added support for LLMs with larger context windows to the application to address this.

The second approach: create smaller chunks of the log that fit within the LLM’s context window. When dealing with a large file, you can either split it into smaller pieces using text tools like grep or awk, or use the application’s configuration to set chunk sizes to something more manageable for the LLM.
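The chunking approach fits in a few lines. This is a sketch using the rough rule of thumb of about 4 characters per token; the numbers are illustrative, not the AI Log Analyzer’s actual defaults:

```python
# Split a log into chunks that fit a token budget, breaking only on
# line boundaries so no log entry is cut in half.
# CHARS_PER_TOKEN = 4 is a rough estimate, not an exact tokenizer.

CHARS_PER_TOKEN = 4

def chunk_log(text: str, max_tokens: int = 2000) -> list[str]:
    budget = max_tokens * CHARS_PER_TOKEN
    chunks, current, size = [], [], 0
    for line in text.splitlines(keepends=True):
        if current and size + len(line) > budget:
            chunks.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("".join(current))
    return chunks
```

Each chunk can then be sent to the LLM separately, and the per-chunk hypotheses merged afterward.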

These solutions allow the application to handle very large log files and give you the answers you need.

Expanding Capabilities: From Debugging to Security

My cybersecurity experience also raised a question: “What if the problem with the system was not simply a software bug or user configuration error, but the system being hacked?” So I asked: “Can I change the log analyzer to act as a security tool or vulnerability-scanning tool?”

The answer turned out to be: why not change the prompt the AI agent uses? Instead of looking for root causes related to common bugs, look for signs of common hacking attempts or security exploits. Just changing the prompt created another tool.
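The “two tools from one” idea can be sketched in a few lines: the analysis code stays the same and only the system prompt changes. These prompt strings are my own illustrations, not the ones the AI Log Analyzer actually uses:

```python
# Same pipeline, two behaviors: only the system prompt differs.
# The prompt text below is illustrative, not from the real tool.

PROMPTS = {
    "root_cause": (
        "Analyze this log for software bugs or misconfigurations and "
        "propose the most likely root cause."
    ),
    "security": (
        "Analyze this log for indicators of compromise: brute-force "
        "attempts, privilege escalation, or known exploit signatures."
    ),
}

def system_prompt(mode: str) -> str:
    return PROMPTS[mode]  # raises KeyError for an unknown mode
```

Adding a third mode, say performance analysis, would be one more dictionary entry, not a new codebase.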

Now I had two tools: a system troubleshooting/root-cause tool and a security vulnerability tool. Asking the right questions gives you solutions that your original assumptions, what you think you know, your hypothesis, might never have led you to.

Connecting to Established Methodologies

This ties back to core Agile and DevOps principles, where development and refinement of code, infrastructure, and solutions begin the same way: by iteratively asking the right questions of the answers you get.

In Lean manufacturing, or the Toyota Production System as it was originally called, you ask the five whys: Why did this happen? Why did that cause this? The “why” questions help you get down to the root cause. In the same way, you can use tools like ChatGPT to ask these questions and develop solutions.

The Question for You

So the question I would pose to you is: are you asking the right questions? Are you asking questions about the product you’re developing or the service you’re offering, rather than just telling the AI what you want? Are you asking what your users need, what the nature of your job is, and what tools you should develop as a result?

The Result: AI Log Analyzer

As a result of this iterative process, I developed the AI Log Analyzer. It reads logs and develops multiple hypotheses about possible root causes, can act as a security analysis tool, and has a REPL or chat mode where you can ask questions about the analysis and possible solutions. More is planned, such as RAG (Retrieval-Augmented Generation), MCP (Model Context Protocol) tools, and a few other integrations (Grafana, ServiceNow, etc.) as time and participation allow. All of it came from asking the right questions about problems I often face.

Conclusion

I welcome feedback that you may have and encourage you to be curious about what you’re doing and how it can affect you and those around you.

Practice asking the right question by asking questions. You may be surprised where it may lead you.


For more information about the AI Log Analyzer, visit my GitHub repository.


Retrospect 2024 – Don’t be the “smartest man in the room”

Retrospect – “a review of or meditation on past events”
A few things come to mind when I reflect on 2024 and think about the past year’s lessons.

1 – Don’t be the “smartest man in the room.” The phrase comes from Richard Holbrooke, via a 1975 article, and means that being the most intelligent person in the room does not guarantee being correct or wise.
Being in a technical profession, we tend to plan, plan, and plan: look at every possible angle and do all the research by ourselves, then execute the plan. This is otherwise known as “Waterfall.”
Whoever comes up with the plan is “the smartest man in the room.” As the article implies, it doesn’t go very well for the man with the plan.
The smartest man becomes the bottleneck – since he has the plan, all the eyes look to him for the answers to what isn’t quite clear in the plan.
The smartest man doesn’t know the future – since he has the plan, he makes educated guesses on what the requirements may be.  He’s not Nostradamus, and the guesses are often wrong, which can cause the plan to fail.
The smartest man is under a lot of stress – when the plan starts going sideways, or even if it doesn’t, he ultimately can’t control the outcome.
2 – “Bring it to the team.” In the book Coaching Agile Teams, Lyssa Adkins advises bringing anything that involves the plan or affects the group to the team. This is the basic concept of Agile and the Scrum framework, and it is the opposite of being “the smartest man in the room”:
There is no bottleneck with a team – As the team of people is self-managing, everyone is in the loop.  They know what the tasks are, the big picture, with a high degree of trust.
The team doesn’t know the future but can adapt and change – instead of guessing everything up front, you plan a short, time-boxed sprint to minimize the risk of bad assumptions and get stakeholder feedback to make sure the project is going in the right direction.
The team shares in both the risks and rewards – as a group, you spread the risk and reduce the stress, and collectively you can do more than any individual. Almost everything you use, such as a car, a house, bread, or a laptop computer, takes a team of people, materials, and time to make.
3 – Practice “personal Agile” – Personal Scrum and other Agile tools can be used on yourself. I keep my own Kanban/Scrum board to track my projects and tasks. Applying Agile concepts to yourself helps with things like overplanning, procrastination, and being “the smartest man in the room.” Your “team” is the people in your network who can help with your projects and fill in your knowledge gaps.

An “aha moment” was the Enterprise Technology Leadership Summit (“the DevOps summit”) I attended last year, which put a few of these things into perspective. While I understood and used DevOps/SRE tools and techniques at work, I realized you need Agile, and the Agile mindset, to get the most out of DevOps in your workplace.

Besides the Agile Manifesto, I recommend reading Scrum: The Art of Doing Twice the Work in Half the Time by Jeff Sutherland. Scrum is one of the most fundamental Agile frameworks.

These are what I learned from 2024.

I’m looking forward to 2025, and I hope you find something useful here.