ChatGPT, Judgment, and the Colin Powell Rule

For the first time in three years, there’s a silent collaborator in my UCLA Anderson MBA classes. GenAI, and ChatGPT in particular, is everywhere in academia. Yet the conversations around it aren’t nearly as straightforward as they should be.

I wrote the following email after asking how many students planned to use ChatGPT and getting a muted response. As I’d later learn, the hesitation wasn’t about ethics. It was about not knowing the “right” answer.

Some context first. 

This is a twenty-week capstone course. Students write research reports, author business plans, build financial models and prototypes, and present to their peers and a panel of industry experts. The goal is to build a real business from the ground up.

It’s the culmination of their MBA experience and a graduation requirement. I, along with a few other advisors, serve as the final gatekeepers. By the time students get to us, they’re informed, polished, and ready to take on the world. Our job is to pressure-test their thinking, sharpen their plans, and assess the quality of their work. Their graduation depends on it.

Here’s the email I sent:

Teams,

You are allowed to use ChatGPT. In fact, I expect you to—just like you would in the real world. I’m bringing this up for everyone, even though it only came up in the FTMBA cross-team discussion. You didn’t need my permission—you were going to use it anyway—but now we can discuss its use openly and move forward.

I’ve been seated next to too many lawyers on planes who draft entire legal briefs with ChatGPT or use NotebookLM to summarize massive filings for clients. If professionals are using it, so should you. This class is meant to reflect real-world experience.

Here’s what I still expect from you and where misuse will hurt you. Apologies if this sounds pedantic.

  • Most internet business writing is crap. ChatGPT is trained on the internet. Without well-structured prompts, it will generate crap.
  • Use Chain-of-Thought Prompting. Treat ChatGPT as a thought partner, not an answer machine, to get meaningful output.
  • No jargon without examples. Don’t just say you’ll achieve operational leverage—show me how. Example: outsourcing internal IT to improve operations, freeing up cash as operating leverage.
  • Use BLUF: Bottom Line Up Front. Get to the point early, then expand. Clear, direct writing is gold.
  • Write short. Every word, sentence, and paragraph should fight for its life. Keeping it tight saves us both time—we’re in a hurry.
  • Follow the Colin Powell principle. Tell me what you know, tell me what you think, and tell me which is which. ChatGPT can put the clay on the table, but it won’t mold it for you. That’s your job. Your deliverables show me your judgment, which is what I’m grading.

And one last thing: AI hallucinations are real, no pun intended, watch for them. 

Hope this helps.

TJ

Not only was the email well received, but it also took on a life of its own. I even heard from students not in my class who said they found it useful.

I also realized that I had, in fact, written a prompt. How would this email perform against the very rules I was recommending? I had to find out. Knowing my class, they’d call me out on it!

I got a 9.8/10. ChatGPT “docked” me because, it said, an em dash in the last sentence, in place of the comma after “intended,” would make the sentence tighter. I disagreed. In this case, the em dash would disrupt the reader’s rhythm, and more generally, em dash overuse is lazy writing.

This brings me to the application of the Colin Powell rule. 

Picking Fact from Opinion

But the grading itself contained a faux application of the Colin Powell principle. The principle hinges on judgment and sourcing. ChatGPT never did the research to determine which of my statements were fact and which were opinion, so it couldn’t verify that I had distinguished the two. And yet it concluded that I had followed the principle.

Finally

In the weeks since I wrote this email, I’ve shared the core prompt with collaborators and colleagues, and they’ve found it useful. They also encouraged me to share it more broadly. So here it is.

But what’s more instructive is that even a strong prompt is no substitute for human empathy, intuition, and judgment.

It’s easy to get caught up in speculation about the distant future and whether we’ll become subservient to some superior machine species. The truth is, we don’t know.

But today we’re still in charge. We’re still doing the work. And we should use the tools we have to do that work well.