Evaluation’s importance
The essential function of the Statsig SDKs is the reliable, consistent, and highly performant allocation of users to the correct bucket in your experiment or feature gate. Understanding how we accomplish this can help you answer questions like:
- Why do I have to pass every user attribute, every time?
- Why do I have to wait for initialization to complete?
- When do you decide each user’s bucket?
How Evaluation Works
Evaluation in Statsig is deterministic. Given the same user object and the same state of the experiment or feature gate, Statsig always returns the same result, even when evaluated on different platforms (client or server). Here’s how it works:
- Salt Creation: Each experiment or feature gate rule generates a unique salt.
- Hashing: The user identifier (e.g., userId, organizationId) is combined with the salt and passed through a SHA-256 hashing function, which produces a large integer.
- Bucket Assignment: The large integer is then taken modulo 10000 (or 1000 for layers), assigning the user to a bucket.
- Bucket Determination: The result defines the specific bucket out of 10000 (or 1000 for layers) where the user is placed.
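To make the mechanics concrete, here’s a minimal sketch of salted-hash bucketing in TypeScript (Node). The concatenation format, the choice to read the first 8 bytes of the digest, and the helper name are illustrative assumptions rather than the exact internals of the Statsig SDKs:

```typescript
import { createHash } from "crypto";

// Sketch of deterministic bucketing: hash(salt + unit ID) mod bucket count.
// The "salt.unitID" format and the use of the first 8 digest bytes are
// assumptions for illustration; the real SDKs may differ in those details.
function computeBucket(salt: string, unitID: string, bucketCount = 10000): number {
  const digest = createHash("sha256").update(`${salt}.${unitID}`).digest();
  const hashValue = digest.readBigUInt64BE(0); // first 8 bytes as an unsigned integer
  return Number(hashValue % BigInt(bucketCount));
}

// Same inputs always yield the same bucket, on any platform.
console.log(computeBucket("experiment_rule_salt", "user_123"));      // 0..9999
console.log(computeBucket("layer_universe_salt", "user_123", 1000)); // 0..999 for layers
```

Because the result depends only on the salt and the unit ID, a server SDK, a client SDK, and a re-evaluation weeks later all land the user in the same bucket.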
Evaluation Order
When evaluating gates, experiments, and layers, the SDK iterates through a list of rules generated by the server. Rules are evaluated sequentially, and the first matching rule determines the result. Overrides always take precedence because they appear first in the rule list. Each step uses the hash-based bucketing described above. Layer allocation and group assignment use different salts, so a user’s position in the layer is independent of their group assignment within the experiment.
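A sketch of that sequential, first-match-wins evaluation, assuming a hypothetical rule shape (the real server-generated rule format is much richer than this):

```typescript
// Hypothetical shapes for illustration only.
type User = { userID: string; [attribute: string]: unknown };

interface Rule {
  name: string;
  matches(user: User): boolean;
  result(user: User): { value: boolean; ruleID: string };
}

// Rules arrive ordered from the server; overrides sit at the front of the list,
// so they win simply by being evaluated first.
function evaluate(rules: Rule[], user: User, fallback: { value: boolean; ruleID: string }) {
  for (const rule of rules) {
    if (rule.matches(user)) {
      return rule.result(user); // first matching rule determines the outcome
    }
  }
  return fallback; // no rule matched: default value
}
```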
Experiments
When an experiment is evaluated (you call getExperiment), it follows this evaluation order:
- ID overrides — Specific user/unit IDs mapped to a group
- Conditional overrides — Segment or gate-based overrides, evaluated in order
- Layer holdouts — If the experiment is in a layer, layer-level holdout gates are checked
- Holdout gates — Experiment-level holdout gates; users in a holdout receive default values
- Experiment exclusion — Mutual exclusion segments that prevent users from being in multiple experiments
- Start status — If the experiment is not started, users receive default values (with optional non-production environment exceptions)
- Layer allocation — For experiments in a layer, the user’s bucket (based on the layer’s universe salt) must fall within the experiment’s allocated segments. This is checked before targeting.
- Targeting gate — Users who fail the targeting gate receive default values. This is checked after layer allocation.
- Group assignment — The user’s bucket (based on the experiment salt) determines which group they fall into. Groups are cumulative ranges across 1000 buckets.
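As a sketch of that last step: group assignment amounts to finding which cumulative range the user’s bucket falls into. The group names and sizes below are illustrative assumptions, not real configuration:

```typescript
// Illustrative only: groups occupy contiguous, cumulative ranges of buckets.
interface Group {
  name: string;
  size: number; // number of buckets allocated to this group, out of 1000
}

function assignGroup(bucket: number, groups: Group[]): string | null {
  let upperBound = 0;
  for (const group of groups) {
    upperBound += group.size;
    if (bucket < upperBound) {
      return group.name;
    }
  }
  return null; // bucket falls outside all group ranges: user is not allocated
}

// A hypothetical experiment where each group covers 100 of the 1000 buckets.
const groups = [
  { name: "Control", size: 100 },
  { name: "Test", size: 100 },
];
console.log(assignGroup(42, groups));  // "Control"
console.log(assignGroup(150, groups)); // "Test"
console.log(assignGroup(900, groups)); // null - not in the experiment
```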
Layers
When a layer is evaluated (you call getLayer), it follows this evaluation order:
- Override rules — ID overrides from all experiments in the layer
- Layer holdout gates — Holdout gates attached to the layer
- Experiment allocation — Each experiment in the layer has a configDelegate rule. The user’s bucket determines which experiment they are delegated to.
- Delegated experiment evaluation — Once delegated, the experiment’s own evaluation runs (start status, targeting gate, group bucketing as described above)
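The delegation step looks much like group assignment, one level up: the layer’s buckets are partitioned among its experiments, and the user’s bucket picks the delegate. A sketch with assumed experiment names and allocation ranges:

```typescript
// Illustrative only: each experiment claims a contiguous range of the layer’s 1000 buckets.
interface LayerAllocation {
  experimentName: string;
  start: number; // inclusive
  end: number;   // exclusive
}

function delegateExperiment(bucket: number, allocations: LayerAllocation[]): string | null {
  const match = allocations.find((a) => bucket >= a.start && bucket < a.end);
  return match ? match.experimentName : null; // null => user receives layer default values
}

const allocations = [
  { experimentName: "new_checkout_flow", start: 0, end: 200 },
  { experimentName: "pricing_page_copy", start: 200, end: 500 },
];
console.log(delegateExperiment(350, allocations)); // "pricing_page_copy"
console.log(delegateExperiment(800, allocations)); // null - layer defaults
```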
Holdouts
Holdout gates evaluate in this order:
- Experiment exclusion — Exclusion segments (if applicable)
- ID overrides — Specific user/unit IDs
- Population targeting gate — If the holdout has a targeting gate, users who fail it are not held out
- Holdout percentage — The pass percentage on the holdout rule determines the holdout rate
Gates
When a feature gate is evaluated (you call checkGate), it follows this evaluation order:
- ID overrides — Specific user/unit IDs mapped to pass/fail
- Conditional overrides — Segment or gate-based overrides
- Holdout rules — If the gate has holdouts attached
- Rules — The gate’s targeting rules, evaluated in the order they appear in the console. Each rule has its own conditions and pass percentage.
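Putting the pieces together for a single gate rule: the conditions decide whether the rule applies at all, and the pass percentage is then applied with the same salted bucketing, here sketched as comparing the user’s bucket (out of 10000) against the configured percentage. The rule and condition shapes are assumptions for illustration; the same pattern applies to holdout pass percentages:

```typescript
import { createHash } from "crypto";

// Hypothetical shapes for illustration; real server-generated rules are richer.
type User = { userID: string; country?: string };

interface GateRule {
  salt: string;
  passPercentage: number; // 0-100, as configured in console
  conditions: Array<(user: User) => boolean>;
}

// Returns true/false if the rule matched, or null to fall through to the next rule.
function evaluateGateRule(rule: GateRule, user: User): boolean | null {
  if (!rule.conditions.every((condition) => condition(user))) {
    return null; // conditions not met: this rule does not apply
  }
  const digest = createHash("sha256").update(`${rule.salt}.${user.userID}`).digest();
  const bucket = Number(digest.readBigUInt64BE(0) % BigInt(10000));
  // e.g. a 10% pass percentage passes buckets 0-999 out of 10000.
  return bucket < rule.passPercentage * 100;
}
```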
When Evaluation Happens
On Server SDKs, evaluation happens when the gate or experiment is checked. To make this possible, Server SDKs hold the entire ruleset of your project in memory - a JSON representation of each gate and experiment. On client SDKs, we evaluate all of the gates/experiments on our servers when you call initialize. All of the above logic holds true for both types of SDK. In both, the user’s assignment bucket is not sent to Statsig until you call the getExperiment/checkGate method in the SDK.
What this means:
- Performant Evaluation: No evaluation requires a network request, and we focus heavily on evaluation performance, so checks take <1ms once initialization is complete.
- The SDKs don’t “remember” user attributes or previous evaluations: we rely on you to pass all of the necessary user attributes consistently - and we promise that if you do, we’ll return the same value.
- Server SDKs can handle multiple users: because they hold the ruleset in memory, Server SDKs can evaluate any user without a network request. This means you pass a user object into the getExperiment method on Server SDKs, whereas on client SDKs you pass it into initialize() (see the sketch after this list).
- We ensure each user receives the same bucket: our ID-based hashing assignment guarantees consistency. If you make a change in the console that could affect user bucketing on an experiment, we’ll provide a warning.
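A sketch of that server-vs-client difference in practice. The interfaces, package shapes, and method signatures below are hypothetical stand-ins - the exact Statsig SDK APIs vary by platform and version - but they show where the user object goes in each model:

```typescript
// Hypothetical interfaces to illustrate the two SDK flavors; real Statsig
// package names and signatures vary by SDK and version.
type User = { userID: string; [attribute: string]: unknown };

interface ServerSDK {
  // Ruleset is held in memory, so the user is passed on every check.
  getExperiment(user: User, experimentName: string): Promise<{ get(param: string, fallback: string): string }>;
}

interface ClientSDK {
  // The user is fixed at initialize(), which is when evaluation happens on Statsig’s servers.
  initialize(user: User): Promise<void>;
  getExperiment(experimentName: string): { get(param: string, fallback: string): string };
}

async function demo(serverStatsig: ServerSDK, clientStatsig: ClientSDK) {
  // Server SDK: pass every relevant attribute, every time.
  const serverResult = await serverStatsig.getExperiment(
    { userID: "user_123", country: "NZ" },
    "new_checkout_flow"
  );

  // Client SDK: attributes are supplied once at initialization; later checks
  // read the results already evaluated for that user.
  await clientStatsig.initialize({ userID: "user_123", country: "NZ" });
  const clientResult = clientStatsig.getExperiment("new_checkout_flow");

  console.log(serverResult.get("button_color", "blue"), clientResult.get("button_color", "blue"));
}
```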