AI as Mirror or Magic
Everyone calls AI a "brain extension," but that's dangerously incomplete. AI extends your brain the way a microphone amplifies your voice—if you're mumbling incoherently, it just makes the incoherence louder. What you feed it, how much context you provide, how clearly you express your intent—these aren't technical constraints, they're consciousness constraints. Your prompt is a reflection of your chitta (mindspace). Unclear mind = unclear prompts = useless outputs.
I've watched this play out countless times. The naive user asks AI to draft an email and sends it without reading it, treating AI like a magic typewriter. The slightly less naive user asks open-ended questions with no follow-up validation, like throwing a question into a well and walking away. The mature user treats AI like a research assistant who needs clear instructions, context, and cross-examination.
The movement with AI isn't about typing prompts—it's about bringing your full awareness to the interaction. My research days from 2001-2005 taught me this. I'd read papers—hard copy, soft copy—make notes, compare findings, hunt for gaps in existing research. Only after building that knowledge base could I frame hypotheses sharp enough to yield specific, useful answers. With LLMs, people want to skip the knowledge-building step. That's like asking for investment advice without understanding your balance sheet.
AI works best for those who have domain expertise to validate outputs, write clearly because they think clearly, and read widely enough to spot patterns AI might miss. The early excitement around ChatGPT? It was from people who finally had help writing resignation emails or communicating in English. That's utility, not transformation. True transformation happens when someone with deep knowledge uses AI to amplify their thinking, not replace it.
What took me months in 2003—aggregating research, finding gaps, framing hypotheses—AI can do in minutes. But speed without rigor is just hurrying toward wrong answers. The discipline required hasn't changed: define constraints clearly, provide context richly, validate outputs ruthlessly, iterate with intention. This is identical to the research process. The only difference is velocity.
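For readers who think in code, that discipline looks something like the sketch below. It's a minimal illustration in Python, assuming nothing about your setup: `ask_model` is a hypothetical placeholder for whatever model client you use, and the checks stand in for judgment only you can supply.

```python
# A minimal sketch of the discipline described above: constraints stated,
# context provided, outputs validated, iteration done with intention.
# Everything here is illustrative; `ask_model` is a placeholder, not a real API.

def ask_model(prompt: str) -> str:
    """Placeholder for an actual LLM client call."""
    raise NotImplementedError("wire up your own model client here")

def build_prompt(question: str, context: str, constraints: list[str]) -> str:
    # Constraints and context are stated explicitly, not left implied.
    constraint_text = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Context:\n{context}\n\n"
        f"Constraints:\n{constraint_text}\n\n"
        f"Question: {question}"
    )

def validate(answer: str, checks: list) -> list[str]:
    # Each check is a (description, test) pair drawn from what you already
    # know about the domain. Failed descriptions become feedback.
    return [desc for desc, test in checks if not test(answer)]

def iterate(question, context, constraints, checks, max_rounds=3):
    prompt = build_prompt(question, context, constraints)
    for _ in range(max_rounds):
        answer = ask_model(prompt)
        failures = validate(answer, checks)
        if not failures:
            return answer  # survived your scrutiny, not just generated
        # Feed the specific failures back instead of re-rolling blindly.
        prompt = build_prompt(
            question,
            context + "\n\nPrevious attempt failed these checks:\n"
            + "\n".join(f"- {f}" for f in failures),
            constraints,
        )
    return None  # nothing survived validation; rethink the question itself
```

The structure is the point: constraints and context go in explicitly, and nothing leaves the loop until it survives checks you defined before seeing any output. The velocity is AI's; the rigor is still yours.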
Here's where it gets spiritual—and critical. If AI is an extension of your brain, and your brain has biases, blind spots, and unconscious drifts from truth, AI will dutifully extend those too. This is where witness consciousness becomes non-negotiable.
You, the drishta, must remain aware of what biases you're bringing to your prompts, what you're unconsciously excluding from your questions, and when AI is confirming what you want to hear versus what you need to know. Sabri se sabra seekho—from patience, learn deeper patience. Don't rush to the AI's first output. Question it. Test it. Use it as a sparring partner, not an oracle.
When I make decisions using AI—whether crafting strategic communications or building frameworks—I operate in two modes simultaneously. The Operator uses AI to generate options, test scenarios, draft content. The Witness watches my own biases, questions my assumptions, stays anchored to core principles. This is viveka (discernment) in action. AI expands possibility; discernment chooses wisely.
The MIT article talks about "intelligent choice architectures"—AI generating better options for decision-makers. But here's what they don't emphasize enough: AI can only surface choices within the boundaries of what you know to ask for. AI creates better choices only when the prompter has clarity on what decision needs making, knowledge to evaluate whether generated options are viable, and wisdom to know when to override algorithmic suggestions. AI gives you a richer menu. You still choose the meal.
For most people, AI won't be transformative because most people don't think clearly enough to direct it. That sounds harsh. But it's the same reason most people don't write well: writing forces clarity, and AI demands the same. The ones who'll win with AI are the ones who were already asking good questions, seeking multiple perspectives, and staying comfortable with ambiguity and iteration. AI just lets them do it faster and with more options. For everyone else? AI is sophisticated autocomplete. Helpful, but not transformative.
"The wise man is not perturbed by praise or blame, gain or loss, pleasure or pain."
AI might generate a "perfect" output, but if it's misaligned with your values or reality, the witness discards it. This reflection phase is where wisdom crystallizes. You take what AI generated, what you observed in the process, and integrate it through the lens of your deepest understanding.
The uncomfortable truth is this: AI doesn't make you smarter; it amplifies what's already there. If you lack domain knowledge, AI will help you be confidently wrong faster. If you lack self-awareness, AI will reinforce your biases with impressive-sounding justifications. If you lack discernment, AI will overwhelm you with options you can't evaluate.
But if you move with intention, see with clarity, and reflect with wisdom—then AI becomes what it was always meant to be: a powerful servant amplifying human consciousness, not replacing it. The question isn't whether AI will replace human decision-making. The question is: Are you conscious enough to direct it?
AI is a powerful servant but a terrible master. And like any tool that extends human capability, it reveals more about the human wielding it than about the tool itself.
This piece was inspired by the MIT Sloan Management Review article "Intelligent Choices Reshape Decision-Making and Productivity" by Michael Schrage and David Kiron. The article explores how AI creates better choice architectures for strategic decisions—a concept that resonated deeply with my own experience of using AI as an extension of consciousness rather than a replacement for it.