Last week, we gave a talk at the inaugural Hacks/Hackers AI Summit, a 250-person gathering of professionals at the intersection of media, technology, and AI.
Across several conversations, one question kept coming up: How do you move forward with AI without putting your organization at risk?
Some teams are on pause. Others are deep in circular debates. Most are stuck asking the same questions: What’s allowed? What’s risky? Who decides? Efforts stall out—not because people don’t care, but because no one is quite sure where the lines are, or who gets to draw them.
If you’re in leadership, your caution is valid. Bad AI use can leak data, erode trust, or create real liabilities.
If you’re doing the work, so is your frustration. You see opportunities every day, but without guidance, you’re either stuck or improvising without air cover.
The core problem is that most AI policies aren’t built on real concerns or real use cases. A good policy isn’t just a list of rules. It’s a map that makes clear what your organization wants to avoid, and what it’s ready to enable.
This piece is a framework for creating that map.
The operational case for an AI policy
An AI policy isn’t just a safeguard. It’s an operational tool that dictates whether innovation moves forward or stalls out.
For leadership, a policy sets boundaries before something goes wrong. It protects the company’s data, IP, and reputation.
For mid-level managers and operators, the policy is about permission. It answers the question: What are we allowed to try? Without clear guidance, people hesitate or, worse, build in the dark.
Done well, a policy enables innovation. It removes the invisible tripwires that stop people from exploring what’s possible.
The shared truth across the org chart is that everyone wants to do the right thing. A clear policy gives people a place to stand and the confidence to move.
Specificity is key to a workable policy
Most AI policy efforts get stuck in a weeks-long game of ping-pong between legal, IT, procurement, and anyone brave enough to propose a use case.
To avoid that back-and-forth, start with outcomes.
If you’re in leadership, your job is to define the non-negotiables. What are the scenarios you absolutely want to prevent? Data leakage? Public-facing hallucinations? Automation with no human oversight? These are the anchors for any effective policy.
If you’re managing a team, make the value legible in terms of outcomes. Don’t just say, “We want to use AI for content.” Say: “We’re trying to draft 80% of internal help articles automatically so support leads can focus on edge cases, and we’ll exclude any ticket involving account credentials or billing.” That’s clear, bounded, and defensible.
Focusing on outcomes does two things:
It keeps the policy grounded in reality.
It gives everyone, from lawyers to line managers, something real to evaluate.
Policies should be built around what actually matters: the consequences you want to avoid, and the value you want to unlock. The “how” can follow.
Match the decision to the level of context
Not every part of an AI policy should be decided at the top. In fact, when too many decisions get centralized, the result is overcaution, underuse, and a document that’s impossible to apply in practice.
If you’re in leadership, your job is to draw the bright lines: no unreviewed model output in customer-facing channels, no automation in areas with regulatory or contractual risk, and no tools where data flows can’t be clearly audited. These aren’t feature-level decisions. They’re existential guardrails that protect the business.
If you’re managing a team, your job is to apply those guardrails to the real context of your work. What kind of exposure would actually matter in your domain? Is it customer contact details? Strategic product planning? Internal conversations about deals in motion? What needs human review not just in theory, but in practice?
Policies only work when they reflect how work actually happens. That means pushing implementation details down to the people who live in the specifics. They know the tradeoffs and the opportunities.
The further a policy is from the work, the more likely it is to say “No” to everything. Put decisions closer to the action, and “Yes, as long as…” becomes possible.
Pilots are the path to progress
Almost every organization has someone waiting on someone else. Legal is waiting on product. Product is waiting on leadership. Teams are waiting on “the policy.” Meanwhile, nothing moves.
Pilots are how you break that loop. You don’t need a finished policy to get started. You need a way to learn safely and visibly.
If you’re in leadership, don’t wait for a perfect policy. Create a lightweight path for teams to propose and run AI pilots. Set the high-trust boundaries: what must be protected, what lines shouldn’t be crossed. Then invite teams to fill in the specifics. You can revise as you go. The important part is sending a clear signal: we’re willing to move.
If you’re managing a team, don’t wait for permission. Propose something small, specific, and safe. For example: “We want to use LLMs to generate first drafts of product copy. Anything referencing pricing, performance claims, or legal disclaimers will be written or reviewed by a human before publication.” That kind of framing builds trust. It shows you’re being deliberate, and it gives leadership something concrete to react to.
Pilots turn intent into insight. They transform theoretical risk debates into real-world feedback and give your organization a safe, visible way to learn its way into better policy.
Every organization navigating AI is balancing two instincts: protect and progress. A good policy makes room for both.
If you’re in leadership, your teams need guardrails, but they also need green lights. Set the boundaries that matter most, and then invite people to move within them. Trust doesn’t mean hands-off. It means being clear about what matters and letting others help figure out the “how.”
If you’re managing a team, don’t wait for top-down clarity. Start small. Define a use case, flag the risks, propose the boundaries. Show that it’s possible to move responsibly. That’s what policy needs more than anything right now: examples.
AI policy shouldn’t be a tug-of-war between caution and ambition. It should be a shared conversation about what your organization actually values, and how it plans to act accordingly.
Policy follows progress. Start making both.