Why experiments belong inside feature flags, not beside them

Many teams get this wrong — and it quietly invalidates their experiment data. Hypertune takes a different approach: experiments live inside feature flags, not beside them. The result is simpler rollout logic, cleaner data, and faster iteration.

16 Oct 2025 · 3 min read

Miraan Tabrez
Founder
Embedding experiments directly in feature flags

One of the cool things about the way we've architected Hypertune is that experiments aren't something you reference directly in your code. Instead, they're created in the Hypertune dashboard and inserted into the rollout logic of a feature flag.

We recently made it much easier to insert an experiment into a feature flag — and then ship the winning variant — all without touching code.

Demo: Inserting an experiment

Demo: Shipping the winning variant

Why we built it this way

Many teams manage feature flags and experiments separately, often using different systems for each. As a result, they end up with separate “experiment flags.”

When testing a new feature, that means checking two flags in code:

Your main feature flag — to control rollout to internal users, beta testers, etc.
Your experiment flag — to split traffic between variants.

At first, this might seem reasonable. But it quickly introduces complexity — and risk.

In theory, the experiment flag should only be checked if the main flag is true. A user should only enter an experiment after passing all the targeting rules of the main feature flag. In effect, the experiment is just the final piece of targeting logic — after all other rollout conditions are met.
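
Conceptually, the experiment split sits at the bottom of the flag's targeting logic. Here's a minimal sketch of what that correct nesting looks like (the user fields and rule order are made up for illustration):

// Hypothetical user shape for illustration
interface User {
  isInternal: boolean
  isBetaTester: boolean
}

// The experiment is the final targeting rule: it's only reached
// once every other rollout condition has been applied
function enableLayoutV2(user: User, inTreatmentArm: boolean): boolean {
  if (user.isInternal) return true     // internal users always see the feature
  if (!user.isBetaTester) return false // otherwise gated to beta testers
  return inTreatmentArm                // experiment decides for eligible users
}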

However, if the experiment flag is checked regardless of the main flag's result, you'll start logging incorrect experiment exposures. Here's a simplified example:

const passesFeatureFlag = featureFlags.get('enableLayoutV2')
// This call logs an experiment exposure as a side effect,
// even if passesFeatureFlag is false
const passesExperiment = experiments.get('layoutV2Experiment')
const showFeature = passesFeatureFlag && passesExperiment

In this setup, every time the experiment flag is checked, an exposure is logged — even for users who fail the main feature flag conditions. That means users who shouldn't be in the experiment at all are being included in your experiment data.

Even worse, if the experiment assigns a user the true variant but the main flag still evaluates to false, the feature stays hidden, creating a mismatch between experiment assignment and actual exposure. This contaminates your experiment data and invalidates your results.

You could work around this by carefully ordering flag checks or adding conditional logging logic — but that's tedious, error-prone, and hard to maintain at scale.
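
For example, you could rely on short-circuit evaluation so the experiment is only checked, and an exposure only logged, after the main flag passes. A sketch using the same hypothetical featureFlags and experiments clients as above:

const passesFeatureFlag = featureFlags.get('enableLayoutV2')
// && short-circuits: experiments.get never runs, and never logs
// an exposure, when the main flag check fails
const showFeature =
  passesFeatureFlag && experiments.get('layoutV2Experiment')

But every call site has to remember this ordering; forget it once and your data is contaminated again.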

Our approach: experiments inside feature flags

Hypertune eliminates this entire class of problems by embedding experiments directly within feature flag targeting. There's no separate “experiment flag” to manage — all rollout logic for a feature lives in one place.
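
In code, that means a single check. A minimal sketch (the client shape and flag name below are illustrative, not Hypertune's exact generated API):

// Illustrative shape of a generated, type-safe flag client
declare const hypertune: {
  enableLayoutV2(options: { fallback: boolean }): boolean
}

// One check covers internal access, beta rollout, and the experiment
// split; an exposure is logged only when the experiment rule inside
// the flag's targeting logic is actually reached
const showFeature = hypertune.enableLayoutV2({ fallback: false })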

This approach brings several advantages:

Only one flag to check in code.
No risk of checking one flag but forgetting to check the other.
No risk of logging experiment exposures incorrectly.
Unified view of your full rollout logic — from internal access to public rollout.
Visual evaluation counts beside each rule for easy debugging and confidence.
Create and clean up experiments entirely from the dashboard — no code changes required.
Reuse the same experiment across multiple flags if needed.

By centralizing rollout and experimentation logic, Hypertune keeps your code clean, your data accurate, and your team fast.
