
Evaluation’s importance

The essential function of the Statsig SDKs is the reliable, consistent, and incredibly performant allocation of users to the correct bucket in your experiment or feature gate. Understanding how we accomplish this can help you answer questions like:
  • Why do I have to pass every user attribute, every time?
  • Why do I have to wait for initialization to complete?
  • When do you decide each user’s bucket?

How Evaluation Works

Evaluation in Statsig is deterministic. Given the same user object and the same state of the experiment or feature gate, Statsig always returns the same result, even when evaluated on different platforms (client or server). Here’s how it works:
  1. Salt Creation: Each experiment or feature gate rule generates a unique salt.
  2. Hashing: The user identifier (e.g., userId, organizationId) is combined with the salt and passed through a SHA256 hashing function, which produces a large integer.
  3. Bucket Assignment: The large integer is then subjected to a modulus operation with 10000 (or 1000 for layers), assigning the user to a bucket.
  4. Bucket Determination: The result defines the specific bucket out of 10000 (or 1000 for layers) where the user is placed.
This process ensures randomized but deterministic bucketing of users across different experiments and feature gates. The unique salt per experiment or feature gate rule ensures that the same user can be assigned to different buckets in different experiments. It also means that if you roll out a feature gate rule to 50%, then back to 0%, then back to 50%, the same 50% of users will be re-exposed - as long as you reuse the same rule and don’t create a new one. For more details, check our open-source SDKs.

For advanced use cases - e.g. a series of related experiments that need to reuse the same control and test buckets - we now expose the ability to copy and set the salts used for deterministic hashing. This is not generally recommended and is meant to be used with care; it is only available to Project Administrators, via the Overflow (…) menu in Experiments.
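As a rough illustration of the four steps above, here is a minimal sketch of salted, deterministic bucketing. It assumes Node’s built-in crypto module, and the salt construction and hash-to-integer conversion are simplified guesses - the open-source SDKs are the source of truth for the real implementation.

```typescript
import { createHash } from "node:crypto";

// Hypothetical helper: maps a unit ID plus a per-rule salt to a bucket in [0, numBuckets).
function computeBucket(salt: string, unitID: string, numBuckets = 10000): number {
  const digest = createHash("sha256")
    .update(`${salt}.${unitID}`) // assumption: salt and unit ID joined with a separator
    .digest();
  // Interpret the first 8 bytes of the hash as an unsigned integer, then reduce it modulo numBuckets.
  const asBigInt = digest.readBigUInt64BE(0);
  return Number(asBigInt % BigInt(numBuckets));
}

// The same salt and unit ID always yield the same bucket...
computeBucket("experiment_rule_salt", "user_123"); // deterministic
// ...while a different salt re-randomizes the same user relative to other experiments.
computeBucket("another_experiment_salt", "user_123"); // very likely a different bucket
```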

Evaluation Order

When evaluating gates, experiments, and layers, the SDK iterates through a list of rules generated by the server. Rules are evaluated sequentially and the first matching rule determines the result. Overrides always take precedence because they appear first in the rule list.
Each step uses the hash-based bucketing described above. Layer allocation and group assignment use different salts, so a user’s position in the layer is independent of their group assignment within the experiment.
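Conceptually, the first-match-wins flow looks something like the sketch below. The types and the computeBucket helper are hypothetical simplifications; the real rule schema is richer.

```typescript
// Hypothetical, simplified shapes for illustration only.
interface StatsigUser { userID: string; [attr: string]: unknown; }
interface Rule {
  salt: string;                        // per-rule salt used for bucketing
  passPercentage: number;              // 0-100
  matches(user: StatsigUser): boolean; // targeting conditions
}

// Assumed to exist: the salted hashing helper sketched earlier on this page.
declare function computeBucket(salt: string, unitID: string, numBuckets?: number): number;

// Rules are evaluated in order; the first match wins. Overrides sit at the front of the list.
function evaluateRules(user: StatsigUser, rules: Rule[]): boolean | null {
  for (const rule of rules) {
    if (!rule.matches(user)) continue;                    // skip rules whose conditions don't match
    const bucket = computeBucket(rule.salt, user.userID); // 0..9999
    return bucket < rule.passPercentage * 100;            // e.g. 50% -> buckets 0..4999 pass
  }
  return null; // no rule matched; the caller falls back to the default value
}
```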

Experiments

When an experiment is evaluated (i.e., when you call getExperiment), it follows this evaluation order (a simplified sketch follows the list):
  1. ID overrides — Specific user/unit IDs mapped to a group
  2. Conditional overrides — Segment or gate-based overrides, evaluated in order
  3. Layer holdouts — If the experiment is in a layer, layer-level holdout gates are checked
  4. Holdout gates — Experiment-level holdout gates; users in a holdout receive default values
  5. Experiment exclusion — Mutual exclusion segments that prevent users from being in multiple experiments
  6. Start status — If the experiment is not started, users receive default values (with optional non-production environment exceptions)
  7. Layer allocation — For experiments in a layer, the user’s bucket (based on the layer’s universe salt) must fall within the experiment’s allocated segments. This is checked before targeting.
  8. Targeting gate — Users who fail the targeting gate receive default values. This is checked after layer allocation.
  9. Group assignment — The user’s bucket (based on the experiment salt) determines which group they fall into. Groups are cumulative ranges across 1000 buckets.
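Below is a heavily simplified sketch of that order. Every type and helper name is hypothetical and only mirrors the documented steps; it is not the actual SDK internals.

```typescript
interface StatsigUser { userID: string; [attr: string]: unknown; }
interface ExperimentConfig {
  salt: string;
  started: boolean;
  layerName?: string;
  targetingGate?: string;
  idOverrides: Record<string, string>;                               // unit ID -> group name
  conditionalOverrides: { matches(u: StatsigUser): boolean; group: string }[];
  groups: { name: string; bucketRangeEnd: number; value: object }[]; // cumulative ranges over 1000 buckets
  defaultValue: object;
}

// Assumed helpers, corresponding to the steps above.
declare function computeBucket(salt: string, unitID: string, numBuckets?: number): number;
declare function passesGate(user: StatsigUser, gateName: string): boolean;
declare function groupValue(exp: ExperimentConfig, groupName: string): object;
declare function inHoldout(user: StatsigUser, exp: ExperimentConfig): boolean;          // steps 3-4
declare function isExcluded(user: StatsigUser, exp: ExperimentConfig): boolean;         // step 5
declare function inAllocatedSegment(user: StatsigUser, exp: ExperimentConfig): boolean; // step 7

function evaluateExperiment(user: StatsigUser, exp: ExperimentConfig): object {
  const idOverride = exp.idOverrides[user.userID];
  if (idOverride) return groupValue(exp, idOverride);                // 1. ID overrides
  for (const o of exp.conditionalOverrides)
    if (o.matches(user)) return groupValue(exp, o.group);            // 2. Conditional overrides, in order
  if (inHoldout(user, exp)) return exp.defaultValue;                 // 3-4. Layer/experiment holdouts
  if (isExcluded(user, exp)) return exp.defaultValue;                // 5. Experiment exclusion
  if (!exp.started) return exp.defaultValue;                         // 6. Not started
  if (exp.layerName && !inAllocatedSegment(user, exp))
    return exp.defaultValue;                                         // 7. Layer allocation (layer salt)
  if (exp.targetingGate && !passesGate(user, exp.targetingGate))
    return exp.defaultValue;                                         // 8. Targeting gate
  const bucket = computeBucket(exp.salt, user.userID, 1000);         // 9. Group assignment (experiment salt)
  const group = exp.groups.find((g) => bucket < g.bucketRangeEnd);
  return group ? group.value : exp.defaultValue;
}
```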

Layers

When a layer is evaluated (i.e., when you call getLayer), it follows this evaluation order (a simplified sketch follows the list):
  1. Override rules — ID overrides from all experiments in the layer
  2. Layer holdout gates — Holdout gates attached to the layer
  3. Experiment allocation — Each experiment in the layer has a configDelegate rule. The user’s bucket determines which experiment they are delegated to.
  4. Delegated experiment evaluation — Once delegated, the experiment’s own evaluation runs (start status, targeting gate, group bucketing as described above)
If no experiment allocation rule matches, the user receives the layer’s default values.
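A minimal sketch of the delegation step, again with hypothetical names - the actual configDelegate rules carry more metadata, and ID overrides and layer holdouts would be checked first:

```typescript
interface StatsigUser { userID: string; [attr: string]: unknown; }
interface LayerConfig {
  universeSalt: string;   // salt shared by all experiments in the layer
  defaultValues: object;
  // Each allocation rule delegates a range of layer buckets to one experiment.
  allocations: { bucketRangeStart: number; bucketRangeEnd: number; experimentName: string }[];
}

declare function computeBucket(salt: string, unitID: string, numBuckets?: number): number;
declare function evaluateExperimentByName(user: StatsigUser, experimentName: string): object;

function evaluateLayer(user: StatsigUser, layer: LayerConfig): object {
  const bucket = computeBucket(layer.universeSalt, user.userID, 1000);
  const allocation = layer.allocations.find(
    (a) => bucket >= a.bucketRangeStart && bucket < a.bucketRangeEnd
  );
  if (!allocation) return layer.defaultValues; // no experiment allocated to this bucket
  // Delegate to the experiment's own evaluation (start status, targeting, group bucketing).
  return evaluateExperimentByName(user, allocation.experimentName);
}
```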

Holdouts

Holdout gates evaluate in this order (sketched after the list):
  1. Experiment exclusion — Exclusion segments (if applicable)
  2. ID overrides — Specific user/unit IDs
  3. Population targeting gate — If the holdout has a targeting gate, users who fail it are not held out
  4. Holdout percentage — The pass percentage on the holdout rule determines the holdout rate
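Sketched with the same hypothetical helpers (not the actual holdout rule schema):

```typescript
interface StatsigUser { userID: string; [attr: string]: unknown; }
interface HoldoutConfig {
  salt: string;
  passPercentage: number;               // the holdout rate on the holdout rule
  targetingGate?: string;               // population targeting gate, if any
  idOverrides: Record<string, boolean>; // unit ID -> held out or not
}

declare function computeBucket(salt: string, unitID: string, numBuckets?: number): number;
declare function passesGate(user: StatsigUser, gateName: string): boolean;
declare function isExcluded(user: StatsigUser, holdout: HoldoutConfig): boolean; // exclusion segments

function isHeldOut(user: StatsigUser, holdout: HoldoutConfig): boolean {
  if (isExcluded(user, holdout)) return false;                                         // 1. Experiment exclusion
  if (user.userID in holdout.idOverrides) return holdout.idOverrides[user.userID];     // 2. ID overrides
  if (holdout.targetingGate && !passesGate(user, holdout.targetingGate)) return false; // 3. Fails targeting -> not held out
  return computeBucket(holdout.salt, user.userID) < holdout.passPercentage * 100;      // 4. Holdout percentage
}
```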

Gates

When a feature gate is evaluated (i.e., when you call checkGate), it follows this evaluation order (a simplified sketch follows the list):
  1. ID overrides — Specific user/unit IDs mapped to pass/fail
  2. Conditional overrides — Segment or gate-based overrides
  3. Holdout rules — If the gate has holdouts attached
  4. Rules — The gate’s targeting rules, evaluated in the order they appear in the console. Each rule has its own conditions and pass percentage.
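The same pattern in sketch form, with overrides and holdouts checked before the gate’s own rules (hypothetical shapes, not the real gate config):

```typescript
interface StatsigUser { userID: string; [attr: string]: unknown; }
interface GateRule {
  salt: string;
  passPercentage: number;              // 0-100
  matches(user: StatsigUser): boolean; // the rule's conditions
}
interface GateConfig {
  idOverrides: Record<string, boolean>;                                       // unit ID -> pass/fail
  conditionalOverrides: { matches(u: StatsigUser): boolean; pass: boolean }[];
  rules: GateRule[];                                                          // in console order
}

declare function computeBucket(salt: string, unitID: string, numBuckets?: number): number;
declare function inHoldout(user: StatsigUser, gate: GateConfig): boolean;

function checkGateSketch(user: StatsigUser, gate: GateConfig): boolean {
  if (user.userID in gate.idOverrides) return gate.idOverrides[user.userID]; // 1. ID overrides
  for (const o of gate.conditionalOverrides)
    if (o.matches(user)) return o.pass;                                      // 2. Conditional overrides
  if (inHoldout(user, gate)) return false;                                   // 3. Holdouts -> default (off)
  for (const rule of gate.rules) {                                           // 4. Rules, first match wins
    if (!rule.matches(user)) continue;
    return computeBucket(rule.salt, user.userID) < rule.passPercentage * 100;
  }
  return false; // no rule matched -> gate default
}
```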

When Evaluation Happens

On Server SDKs, evaluation happens when the gate or experiment is checked. To make this possible, Server SDKs hold your project’s entire ruleset in memory - a JSON representation of each gate and experiment. On Client SDKs, we evaluate all of the gates/experiments on our servers when you call initialize. All of the above logic holds true for both. In both cases, the user’s assignment bucket is not sent to Statsig until you call the getExperiment/checkGate method in the SDK.

What this means:

  • Performant Evaluation: no evaluation requires a network request, and we focus heavily on evaluation performance - checks take <1ms once evaluation has happened.
  • The SDKs don’t “remember” user attributes or previous evaluations: we rely on you to pass all of the necessary user attributes consistently - and we promise that if you do, we’ll return the same value.
A common assumption is that Statsig keeps a list of every ID and the group it was assigned to for each experiment or gate. While our data pipelines track users exposed to each variant to compute experiment results, we do not cache previous evaluations or maintain distributed evaluation state across client and server SDKs. That wouldn’t scale - we’ve talked to customers who did this in the past and were paying more for Redis to maintain that state than they ended up paying for Statsig.
  • Server SDKs can handle multiple users: because they hold the ruleset in memory, Server SDKs can evaluate any user without a network request. This means you’ll have to pass a user object into the getExperiment method on Server SDKs, whereas on client SDKs you pass it into initialize().
  • We ensure each user receives the same bucket: our ID-based hashing assignment guarantees consistency. If you make a change in the console that could affect user bucketing on an experiment, we’ll show a warning.
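To make the Server/Client difference concrete, here is an illustrative sketch. The serverStatsig and clientStatsig handles, the experiment/gate names, and the method shapes are placeholders - check the SDK docs for the exact API in your language.

```typescript
// Hypothetical handles standing in for your server-side and client-side SDK instances.
declare const serverStatsig: {
  // Evaluates locally against the in-memory ruleset; no network request per check.
  getExperiment(user: object, experimentName: string): { get(param: string, fallback?: unknown): unknown };
};
declare const clientStatsig: {
  initialize(sdkKey: string, user: object): Promise<void>;
  checkGate(gateName: string): boolean;
};

const user = { userID: "user_123", email: "user@example.com", country: "NZ" };

// Server SDK: the user goes into every check, because the SDK can evaluate any user.
const experiment = serverStatsig.getExperiment(user, "new_checkout_flow");  // hypothetical experiment name
const buttonColor = experiment.get("button_color", "blue");                 // hypothetical parameter name

// Client SDK: the user is supplied once at initialize; evaluation happens on Statsig's
// servers at that point, and later checks read the already-evaluated results.
await clientStatsig.initialize("client-sdk-key", user);
const passes = clientStatsig.checkGate("new_checkout_flow_gate");           // hypothetical gate name
```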

Evaluation of null/empty unitIDs

Note that we do not apply any filtering or business logic before assigning an individual userID to a bucket. This means that even a null or empty unitID will still be bucketed, based on the salt.
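For example, reusing the hypothetical computeBucket sketch from earlier, an empty ID is hashed just like any other string:

```typescript
// Assumed to exist: the salted hashing helper sketched earlier on this page.
declare function computeBucket(salt: string, unitID: string, numBuckets?: number): number;

// An empty unit ID is not filtered out; it hashes to a valid bucket, so for a given salt
// all traffic with a missing ID consistently lands in the same bucket.
const bucketForEmptyId = computeBucket("my_gate_rule_salt", ""); // "my_gate_rule_salt" is a made-up salt
console.log(bucketForEmptyId); // always the same value in 0..9999 for this salt
```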