How We Use AI at Palantir

Building an AI policy that’s rooted in practical experience, not hype

[Cover illustration: a woman at a desk with a laptop in conversation with a humanoid robot in a bright, modern office.]

As generative AI tools like ChatGPT, Microsoft Copilot, Google Gemini, and Claude have become ubiquitous in our workplaces, many organizations are still struggling with how to adopt them effectively. Some companies are telling employees they need to use AI as much as possible, but aren't providing clarity and guardrails on how to do so. Others avoid AI tools entirely, worried about the risks and uncertainties around their use.

The reality, as commentator Anil Dash recently observed, is that most people who work in technology have a very consistent view on AI:

"Technologies like LLMs have utility, but the absurd way they've been over-hyped, the fact they're being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value."

As a result, the practical voices asking "how do we use this responsibly in our actual work?" are drowned out, and many organizations end up without any clear guidance about where AI adds value and where it doesn't. This can create an uneven playing field where some workers are using AI without any guardrails, while others stay away completely, avoiding use cases that might save time and money.

How do you create an AI workplace policy?

At Palantir, we found ourselves in this position last year. As a digital strategy and development consultancy, we're naturally curious about new technologies and how they might benefit our clients. Several of our team members had been experimenting with AI tools for some time, looking for ways they might improve the quality of our work.

As we started using these tools more frequently, however, the questions multiplied faster than answers:

  • Which AI tools were okay to use and which weren't?
  • How should we establish human oversight and disclosure requirements?
  • What about intellectual property concerns with AI-generated content?
  • How do we maintain quality standards when AI is involved in our deliverables?
  • How do we ensure data privacy and security?
  • How do we address AI's environmental footprint?
  • What are our obligations under emerging AI regulations?

Many members of our team were excited about AI's potential, but needed clear direction and guardrails to harness that enthusiasm safely and effectively. More importantly, we needed to level the playing field by creating an environment where everyone felt empowered to use AI tools appropriately, rather than some people forging ahead while others held back out of uncertainty.

We wanted to treat AI like a normal technology. That meant applying the same thoughtful evaluation, risk assessment, and implementation practices we'd use for any other tool, without the extremes of either blind enthusiasm or complete avoidance.

The stakes are higher than you think

Before diving into our solution, it's worth understanding why this matters so much. Without proper guidelines, the risks extend far beyond individual productivity gains or losses. Teams operating without AI policies face several serious challenges:

  • Data breaches and privacy violations can happen when sensitive information gets inadvertently shared with third-party AI services.
  • Quality issues and "workslop" emerge when team members generate low-quality AI content and pass it along to colleagues, creating more work than it saves.
  • Client trust issues arise when AI usage isn't properly disclosed or when it conflicts with client expectations or policies.
  • Legal complications are increasingly likely as new regulations around AI in the workplace take effect.
  • Inconsistent practices and cultural problems develop when different team members use AI tools differently, with varying levels of oversight and review.
  • Environmental and ethical concerns go unaddressed when organizations adopt AI without considering sustainability commitments or how systems use content without creator consent.

We realized that creating our own AI policy wasn't just a nice-to-have; it was essential for building a workplace culture where AI use could be transparent, responsible, and valuable for everyone. We needed a framework that would create buy-in across the organization while prioritizing quality, safety, and security.

We needed to be part of the reasonable majority: treating AI as a normal technology, subject to thoughtful evaluation and appropriate controls, focused on legitimate uses rather than hype.

The question wasn't whether we needed an AI policy. The question was how to build one that actually worked.

Establishing an AI Working Group

One of our first realizations was that creating an effective AI policy couldn't be a solo effort or even a small, homogeneous group project. AI touches every part of how we work, from the code our developers write to the strategies our consultants craft to the way our project managers coordinate client relationships. If we wanted a policy that would actually be useful (and used), we needed perspectives from across the organization.

We decided to formalize our approach by creating an AI Policy Working Group. This wasn't just a committee; it was a cross-functional team with a specific charter: drafting policy for the safe, private, secure, and ethical use of AI at Palantir.net.

The working group structure gave us several advantages. It provided legitimacy and resources for the work, created clear accountability, and ensured that policy development was treated as a priority rather than a side project. Most importantly, it established a framework that could continue beyond the initial policy creation to handle ongoing updates and new challenges.

We started by identifying team members who had two crucial qualities: genuine interest in AI tools and expertise in different domains of our work. These weren't necessarily the most technical people or the biggest AI enthusiasts. Instead, we looked for people who could bridge the gap between AI capabilities and real work requirements.

A project manager who'd been using AI to draft client communications had different insights than a developer using it for code review, and both perspectives were essential. We needed people who understood not just what AI could do, but how it fit into our actual workflows and client relationships. We also needed to include members of our operations and leadership teams who could speak to compliance and data safety concerns.

Identifying key themes and establishing accountability

Our discovery process started with structured interviews with key stakeholders across the organization. We talked to leadership, project managers, and team members who were already using AI tools in their work. The goal wasn't to gather opinions about AI in general, but to understand specific challenges, concerns, and requirements our policy would need to address.

Several clear themes emerged from these discovery interviews, and for each one, we identified who would be responsible for implementation:

1. Human accountability must remain paramount

There was strong consensus that regardless of how AI was used, humans needed to remain fully accountable for the quality and outcomes of their work. AI should be used to enhance human capabilities, not replace human judgment.

Who owns this: Project teams are responsible for ensuring AI tools are used in a way that delivers value and maintains human oversight. Team members remain fully accountable for the quality, substantial completion, and outcome of AI-assisted work, treating AI assistance similarly to collaborating with another team member.

2. Data privacy and security are non-negotiable

Team members needed clear guidance about what data could and couldn't be shared with AI tools. We also needed clear rules ensuring that client data would never be used to train third-party AI models.

Who owns this: Our Systems & Infrastructure team establishes the process for evaluating new AI tools, decides which ones are approved for use, and assesses ongoing security and compliance. They conduct vendor assessments with robust SLAs and ensure tools don't use company or client data to train their models.

3. Transparency with clients is essential

Some of our clients already had their own AI policies that restricted or required disclosure of AI use. Others were curious about how we were leveraging AI for their benefit. We needed clear protocols to navigate these conversations.

Who owns this: Project teams ensure tools are used in a way that is consistent with client policy requirements and restrictions. They're responsible for proactively communicating with clients (and each other) how AI is being used on their projects, reviewing client-specific AI policies, and documenting AI usage in workflows.

4. We need practical "do's and don'ts"

Team members wanted concrete, actionable guidance. They didn't need us to solve the big philosophical questions about AI's role in society; they needed to know whether it was okay to use ChatGPT to help draft a project proposal, or how to disclose AI usage in a client report.

Who owns this: The AI Working Group provides advice, consultation, and support to team members, helping translate policy into practical day-to-day decisions. Our Operations team navigates regulatory requirements, liability questions, and evolving legal frameworks, including avoiding discriminatory use of AI in employment decisions and performance evaluations.

Making it work in practice

The key to our working group's effectiveness was balancing structure with flexibility. We had regular check-ins and clear deliverables, but we also made space for organic collaboration and iteration. People could contribute according to their expertise and availability without feeling overwhelmed.

We also made the process transparent from the beginning, using shared documents and collaborative spaces so anyone could see our progress and thinking. This transparency proved valuable later when we moved to company-wide review and adoption. And of course, we used AI tools throughout the policy development process to help synthesize team member inputs and iterate on language more efficiently.

Most importantly, we kept the focus practical rather than philosophical. While we certainly discussed the broader implications of AI, our primary goal was creating something our colleagues could actually use in their daily work.

The results

After releasing a draft for comment and review in December 2024, we officially implemented our Generative AI Usage Policy in January of this year. Since then, we've developed new product and service offerings as part of an internal "skunkworks" program, including our custom AI content auditing tool that cuts the time and cost of content audits by 30-50%. We have also implemented Onyx as an internal tool that brings together information from sources that had previously been locked in their own silos.

Our team members regularly use company-approved AI tools like Gemini, GitHub Copilot, and Warp to assist with research, code development, drafting documentation, and debugging. Our code is not written by AI; rather, AI assists developers by suggesting code completions, helping identify bugs, and accelerating routine tasks, while humans make all architectural decisions, conduct peer reviews, ensure security compliance, and maintain accountability for quality. We treat AI assistance similarly to collaborating with another team member: the human remains fully responsible for the work product's accuracy, security, and appropriateness.
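As one hypothetical illustration of that principle, a team could record AI assistance in commit trailers and flag any AI-assisted commit that lacks a recorded human reviewer. The trailer names and revision range below are assumptions for the sketch, not an actual Palantir convention.

```python
# A minimal sketch: scan recent commits for an "AI-Assisted" trailer and flag
# any that lack a human "Reviewed-by" trailer. Trailer names are illustrative.
import subprocess

def commits_needing_review(rev_range: str = "HEAD~20..HEAD") -> list[str]:
    """Return hashes of AI-assisted commits with no recorded human reviewer."""
    # %H = commit hash, %B = raw message; %x1f / %x1e are unambiguous separators.
    log = subprocess.run(
        ["git", "log", "--format=%H%x1f%B%x1e", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout

    flagged = []
    for record in filter(None, log.split("\x1e")):
        commit_hash, _, body = record.partition("\x1f")
        if "AI-Assisted:" in body and "Reviewed-by:" not in body:
            flagged.append(commit_hash.strip())
    return flagged

if __name__ == "__main__":
    for commit in commits_needing_review():
        print(f"AI-assisted commit without a recorded human review: {commit}")
```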

As a company that does a lot of work with Drupal, we've also been closely following the work of the Drupal AI Initiative to build tools that empower teams to create intelligent experiences with complete oversight of AI operations. With over 290 AI modules available and integrations spanning 21 major providers, Drupal is already a leading platform for AI integration. At DrupalCon Vienna last week, Drupal project lead Dries Buytaert shared his vision for Drupal as a leading AI site-building platform.

In some cases, AI agents are already handling routine quality assurance tasks like ensuring alt text completeness, validating heading hierarchies, checking reading levels, and monitoring brand consistency. Looking forward, we see AI playing increasingly sophisticated roles in website management and optimization.
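To make those QA tasks concrete, here is a minimal sketch of the kind of check such an agent might run, assuming HTML input and the BeautifulSoup library; the function name, findings format, and sample markup are illustrative rather than taken from our actual tooling.

```python
# Illustrative QA checks: alt text completeness and heading hierarchy.
from bs4 import BeautifulSoup

def audit_page(html: str) -> list[str]:
    """Return a list of QA findings for a single HTML page."""
    soup = BeautifulSoup(html, "html.parser")
    findings = []

    # Alt text completeness: every <img> should carry a non-empty alt attribute.
    for img in soup.find_all("img"):
        if not img.get("alt", "").strip():
            findings.append(f"Missing alt text: {img.get('src', '<no src>')}")

    # Heading hierarchy: levels should not skip (e.g., h2 followed by h4).
    levels = [int(h.name[1]) for h in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])]
    for previous, current in zip(levels, levels[1:]):
        if current > previous + 1:
            findings.append(f"Heading level skips from h{previous} to h{current}")

    return findings

if __name__ == "__main__":
    sample = "<h1>Title</h1><h3>Skipped a level</h3><img src='hero.png'>"
    for finding in audit_page(sample):
        print(finding)
```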

What we learned

By treating AI as a normal technology rather than either a silver bullet or an existential threat, we found our way to a middle ground that we believe more accurately reflects AI's capabilities and challenges. Our working group didn't just produce a policy document; it created organizational buy-in, practical guidance that teams actually use, and a framework that continues to evolve as the technology changes.

We've made our policy template available on GitHub as a starting point, but the real value isn't in copying our specific rules; it's in building your own cross-functional working group, conducting discovery with your stakeholders, and creating guidelines that reflect your organization's values and needs. The goal isn't to have perfect answers to every AI question. The goal is to create an environment where your team can focus on legitimate uses that add value, with clear guidance on how to do so safely, ethically, and effectively.

This post was written with the assistance of Anthropic Claude. The cover image was generated by Adobe Firefly.
