
Legal-Ready AI: 7 Tips for Engineers Who Don’t Want to Be Caught Flat-Footed

Chris Wolf

An oversimplified way I’ve explained wisdom in the past is to say, “We don’t know what we don’t know until we know it.” That absolutely applies to the fast-moving AI space, where unknowingly introducing legal and compliance risk through an organization’s use of AI is a top concern among IT leaders.

We’re now building systems that learn and evolve on their own, and that raises new questions along with new kinds of risk affecting contracts, compliance, and brand trust.

At Broadcom, we’ve adopted what I’d call a thoughtful ‘move smart, then fast’ approach. Every AI use case requires sign-off from both our legal and information security teams. Some folks may complain that it slows them down. But if you’re moving fast with AI and putting sensitive data at risk, you’re inviting trouble unless you also move smart.

Here are seven things I’ve learned about collaborating with legal teams on AI projects.

1. Partner with Legal Early On

Don’t wait until the AI service is built to bring legal in. There’s always a risk that the choices you make about data, architecture, and system behavior will create regulatory headaches or break contracts later on.

Besides, legal doesn’t need every answer on day one. What they do need is visibility into the gray areas. What data are you using and producing? How does the model make decisions? Could those decisions shift over time? Walk them through what you’re building and flag the parts that still need figuring out.

2. Document Your Decisions as You Go

AI projects move fast, with teams making dozens of early decisions on everything from data sources to training logic. A few months later, chances are no one remembers why those choices were made. Then someone from compliance shows up with questions about those choices, and you’ve got nothing to point to.

To avoid that situation, keep a simple log as you work. Then, if an audit or inquiry comes later, you’ll have something solid to help answer any questions.
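
A minimal sketch of what that log could look like, as an append-only JSON Lines file. The DecisionRecord fields here are illustrative assumptions, not a prescribed schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One entry in an append-only project decision log (illustrative fields)."""
    decision: str       # what was decided
    rationale: str      # why it made sense at the time
    data_sources: list  # datasets or feeds involved
    owner: str          # who signed off

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    # Append-only JSON Lines: cheap to maintain, easy to hand to an auditor.
    entry = {"ts": datetime.now(timezone.utc).isoformat(), **asdict(record)}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_decision(DecisionRecord(
    decision="Fine-tune on internal support tickets (2022-2024)",
    rationale="Closest match to production queries; PII scrubbed upstream",
    data_sources=["support_tickets_v3"],
    owner="jane.doe",
))
```

The habit matters more than the format; even a shared spreadsheet beats reconstructing decisions from memory six months later.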

3. Build Systems You Can Explain

Legal teams need to understand your system so they can explain it to regulators, procurement officers, or internal risk reviewers. If they can't, there’s the risk that your project could stall or even fail after it ships.

I’ve seen teams consume SaaS-based AI services without realizing the provider could swap out a backend AI model without their knowledge. If the system’s behavior changes behind the scenes, it could redirect your data in ways you didn’t intend. That’s one reason why you’ve got to know your AI supply chain, top to bottom. Ensure that services you build or consume have end-to-end auditability of the AI software supply chain. Legal can’t defend a system if they don’t understand how it works.
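
One lightweight way to get that visibility is to record the model identity your provider reports on every call. This is a hedged sketch: the client.complete method and the response attributes are stand-ins for whatever your vendor’s SDK actually exposes.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_supply_chain")

def call_model(client, prompt: str) -> str:
    """Wrap every inference call so the exact model identity is recorded."""
    response = client.complete(prompt)  # placeholder for your vendor's SDK call
    # Log whatever identity the provider exposes. If the backend model is
    # swapped silently, the change shows up in your audit trail instead of
    # surfacing as a mystery months later.
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "model": getattr(response, "model", "unknown"),
        "model_version": getattr(response, "model_version", "unknown"),
        "output_sha256": hashlib.sha256(response.text.encode()).hexdigest(),
    }))
    return response.text

# Stub objects so the sketch runs on its own; swap in your real client.
class _StubResponse:
    model, model_version, text = "demo-model", "2025-01-15", "hello"

class _StubClient:
    def complete(self, prompt):
        return _StubResponse()

print(call_model(_StubClient(), "ping"))
```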

4. Watch Out for Shadow AI

Any engineer can subscribe to an AI service and accept the provider’s terms, often without realizing they don’t have the authority to do that on behalf of the company.

That exposes the organization to major risk. An engineer might accidentally agree to data-sharing terms that violate regulatory restrictions or expose sensitive customer data to a third party.

And it’s not just deliberate use anymore. Run a search in Google and you’re already getting AI output. It’s everywhere. The best way to manage this risk is to build a culture where employees are aware of the legal boundaries. Give teams a safe place to experiment, but at the same time, make sure you know what tools they’re using and what data they’re touching.

5. Help Legal Navigate Contract Language

AI systems get tangled in contract language: ownership rights, retraining rules, model drift, and more. Most engineers aren’t trained to spot those issues, but we’re the ones who understand how the systems behave.

That’s another reason why you’ve got to know your AI supply chain, top to bottom. In this case, legal needs our help reviewing vendor or customer agreements to put the contractual language into the appropriate technical context. What happens when the model changes? How are sensitive data sets safeguarded from being indexed or accessed via AI agents, such as those that use the Model Context Protocol (MCP)? We can translate the technical behavior into simple English, and that goes a long way toward helping the lawyers write better contracts.
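
One safeguard worth walking legal through is a default-deny allow list: agents, MCP-based or otherwise, see only the fields that have been explicitly approved, rather than everything minus a deny list. A minimal sketch, with hypothetical field names:

```python
# Only fields legal has explicitly approved are visible to agents.
ALLOWED_FIELDS = {"ticket_id", "product", "error_code", "resolution"}

def expose_to_agent(record: dict) -> dict:
    """Return only approved fields for agent access; everything else is dropped."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "ticket_id": "T-1042",
    "product": "ExampleProduct",
    "error_code": "E503",
    "customer_email": "user@example.com",  # never crosses this boundary
    "resolution": "Patched in 2.0.1",
}
print(expose_to_agent(raw))  # customer_email is excluded by default
```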

6. Design with Auditability in Mind

AI is developing rapidly, with legal frameworks, regulatory requirements, and customer expectations evolving to keep pace. You need to be prepared for what might come next. 

Can you explain where your training data came from? Can you show how the model was tested for bias? Can you justify how it works? If someone from a regulatory body walked in tomorrow, would you be ready?

Design with auditability in mind. Especially when AI agents are chained together, you need to be able to prove that identity and access controls are enforced end-to-end. 
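
As a sketch of what that proof can look like, each hop in an agent chain emits an audit record that carries the same trace ID and the original caller’s identity. The record shape here is an assumption for illustration, not a standard:

```python
import json
import uuid
from datetime import datetime, timezone

def audit_step(trace_id: str, agent: str, acting_for: str, action: str) -> dict:
    """Emit one audit record per hop; trace_id ties the whole chain together."""
    entry = {
        "trace_id": trace_id,
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "acting_for": acting_for,  # the originating identity, not the agent's
        "action": action,
    }
    print(json.dumps(entry))  # in practice, ship to an append-only audit store
    return entry

# The same trace_id and originating identity follow the request end-to-end,
# so you can show exactly who every downstream action ran on behalf of.
trace = str(uuid.uuid4())
audit_step(trace, agent="planner", acting_for="alice@example.com", action="decompose request")
audit_step(trace, agent="retriever", acting_for="alice@example.com", action="query knowledge index")
```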

7. Handle Customer Data with Care

We don’t get to make decisions on behalf of our customers about how their data gets used. It’s their data. And when it’s private, it shouldn’t be fed to a model. Period. 

You’ve got to be disciplined about what data gets ingested. If your AI tool indexes everything by default, that can get messy fast. Are you touching private logs or passing anything to a hosted model without realizing it? Support teams might need access to diagnostic logs, but that doesn’t mean third-party models should touch them. Tools that generate comparable synthetic data, devoid of any customer private data, are evolving rapidly and could help with support use cases, for example. But these tools and techniques should be fully vetted with your legal and CISO organizations before you use them.
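
As one illustration of that discipline, log lines can be scrubbed before anything is eligible to reach a hosted model. This is a toy sketch built on two regexes; real deployments need vetted PII detection and, as noted above, legal and CISO sign-off:

```python
import re

# Illustrative patterns only; production systems need far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IP": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact(line: str) -> str:
    """Scrub obvious identifiers before a log line can leave your boundary."""
    for label, pattern in PATTERNS.items():
        line = pattern.sub(f"[{label}]", line)
    return line

print(redact("2025-05-01 auth failure for bob@example.com from 10.0.0.7"))
# 2025-05-01 auth failure for [EMAIL] from [IP]
```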

The Reality

The engineering ethos is to move fast. But since safety and trust are on the line, you need to move smart, which means it's okay if things take a little longer. The extra steps are worth it when they help protect your customers and your company.

Nobody has this all figured out. So ask questions, and talk to people who’ve handled this kind of work before. The goal isn’t perfection; it’s smart, careful progress. For enterprises, the AI race isn’t a question of “Who’s best?” but rather “Who’s leveraging AI safely to drive the best business outcomes?”