
Risk Scores Explained

Understand how VertexY computes scores, actions, levels, and reason codes.

The output you receive

Every assessment returns:

  • riskScore: number from 0 to 100
  • action: allow, review, or block
  • recommendedAction: the engine’s direct recommendation
  • riskLevel: low, medium, high, or critical
  • reasonCodes: machine-readable explanation list
  • featureContributions: low-level diagnostic values
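The fields above can be consumed with a few lines of client code. This is a minimal sketch: the payload below is an illustrative example shaped like the documented fields, not real engine output, and the HTTP transport is omitted.

```python
# Illustrative assessment payload using the documented field names.
assessment = {
    "riskScore": 72,
    "action": "review",
    "recommendedAction": "review",
    "riskLevel": "high",
    "reasonCodes": ["VELOCITY_ZSCORE_SPIKE", "ONE_HOP_GUARD_TRIGGERED"],
    "featureContributions": {"velocity_zscore": 3.4},
}

def handle(assessment: dict) -> str:
    """Branch on the final `action`, which is the field applications
    should act on (see the policy-mode section below)."""
    action = assessment["action"]
    if action == "block":
        return "declined"
    if action == "review":
        return "held_for_review"
    return "approved"

print(handle(assessment))  # -> held_for_review
```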

How the score is built

VertexY combines four major signal families:

  1. Graph: shared devices, IPs, payment fingerprints, and fraud-neighbor structure.
  2. Velocity: bursts, spikes, and repeated recent activity.
  3. Similarity: overlap with known bad or suspicious indicators.
  4. Contextual: BIN, geo, timezone, address mismatch, country mismatch, and user profile mismatch.

The current scoring implementation weights them approximately as:

  1. Graph: 35%
  2. Velocity: 25%
  3. Similarity: 15%
  4. Contextual: 25%
ℹ️ Weights are implementation details and may evolve. Clients should integrate against the response contract, not hard-code internal formulas.
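For intuition only, the blend above behaves like a weighted sum. This sketch is not the engine's formula (which is internal and may change); the component scores are hypothetical values on a 0 to 100 scale.

```python
# Approximate signal-family weights as documented; intuition only.
WEIGHTS = {"graph": 0.35, "velocity": 0.25, "similarity": 0.15, "contextual": 0.25}

def blended_score(components: dict) -> float:
    """Weighted sum of the four signal families (hypothetical inputs)."""
    return sum(WEIGHTS[name] * components.get(name, 0.0) for name in WEIGHTS)

score = blended_score({"graph": 80, "velocity": 60, "similarity": 20, "contextual": 40})
print(round(score, 1))  # -> 56.0
```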

Risk levels

  1. low: usually 0 to 30.
  2. medium: usually 31 to 60.
  3. high: usually 61 to 85.
  4. critical: usually 86 to 100.
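The typical bands can be sketched as a simple mapping. Because the bands are "usually" and not contractual, prefer the riskLevel field returned in the response when it is present; this helper is a fallback illustration only.

```python
def risk_level(score: int) -> str:
    """Map a 0-100 score to the typical level bands documented above.
    Bands are typical, not contractual: prefer the riskLevel field
    from the response when available."""
    if score <= 30:
        return "low"
    if score <= 60:
        return "medium"
    if score <= 85:
        return "high"
    return "critical"

print(risk_level(42))  # -> medium
```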

Actions

  1. allow: proceed normally.
  2. review: hold for analyst review or step-up verification.
  3. block: stop the transaction.

The final action is influenced by both the risk score and your tenant's policy mode.

policyMode versus recommendedAction

  • recommendedAction is what the engine itself thinks should happen
  • policyMode controls how that recommendation is applied for your tenant
  • action is the final value your application should act on

Example:

  • In hybrid, action usually matches recommendedAction.
  • In advisory, the engine still scores but your policy may reduce enforcement.
  • In shadow, the engine scores while live business action stays permissive.
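The mode semantics are applied server-side per tenant, so client code normally just reads action. Purely for illustration, they could be sketched like this; the advisory softening rule shown is an assumption, not a guaranteed behavior.

```python
def apply_policy(recommended: str, policy_mode: str) -> str:
    """Illustrative-only sketch of how policy modes shape the final action."""
    if policy_mode == "shadow":
        return "allow"       # engine still scores; live action stays permissive
    if policy_mode == "advisory" and recommended == "block":
        return "review"      # illustrative softening, not a guaranteed rule
    return recommended       # hybrid: enforce the engine's recommendation
```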

Common reason codes

You do not need to memorize every code. In practice, they fall into a few simple groups:

Threat and graph signals

  • GLOBAL_INDICATOR_MATCH: one or more identifiers matched known global threat intelligence.
  • BLACKLIST_OVERLAP_HIGH: the transaction shares a strong overlap with suspicious or blocked indicators.
  • ONE_HOP_GUARD_TRIGGERED: the user is directly connected to a risky fraud neighbor in the graph.

Velocity and behavior signals

  • VELOCITY_ZSCORE_SPIKE: recent activity jumped well above the normal baseline for that entity.
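A z-score of this kind measures how far the latest activity sits above the entity's recent baseline, in standard deviations. The sketch below shows the arithmetic only; the window size and any spike threshold are assumptions, not engine parameters.

```python
import statistics

# Hypothetical recent baseline: events per hour over an assumed window.
baseline = [4, 5, 3, 6, 4, 5, 4, 5]
latest = 19

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
zscore = (latest - mean) / stdev

# A large positive z-score suggests a burst well above normal activity.
print(zscore > 3)  # -> True
```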

Degraded or partial-signal responses

  • GRAPH_UNAVAILABLE: graph data could not contribute to this decision.
  • REDIS_UNAVAILABLE: velocity or cache-backed signals were temporarily unavailable.
  • BLOOM_UNAVAILABLE: a bloom-filter or threat-intelligence lookup could not contribute.
  • GDS_SCORES_UNAVAILABLE: precomputed graph-science scores were not available.
  • ML_INFERENCE_UNAVAILABLE: the ML probability subsystem did not contribute.
  • CONTEXTUAL_UNAVAILABLE: contextual checks could not be computed.

Policy-mode markers

  • POLICY_MODE_ADVISORY: the response was produced while your tenant was in advisory mode.
  • POLICY_MODE_SHADOW: the response was produced while your tenant was in shadow mode.
💡 Treat reason codes as explanation hints, not as a strict ranking. The safest way to automate decisions is to use action, recommendedAction, riskLevel, and riskScore together.
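That pattern can be sketched as follows: automation keys off the decision fields together, while reason codes stay as explanation hints for analyst display. The routing labels and the critical-level branch are illustrative choices, not documented behavior.

```python
def decide(assessment: dict) -> str:
    """Drive automation from action plus riskLevel; routing labels
    and thresholds here are illustrative, not documented behavior."""
    action = assessment["action"]
    if action == "block":
        return "decline"
    if action == "review":
        # riskLevel can select the kind of review without re-ranking codes.
        if assessment.get("riskLevel") == "critical":
            return "manual_review"
        return "step_up_verification"
    return "approve"
```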

Reading featureContributions

featureContributions is a diagnostic object. It may contain:

  1. velocity_zscore: burst intensity compared with the historical baseline.
  2. indicator_overlap_ratio: the overlap ratio with suspicious indicators.
  3. graph_neighbor_ratio_n2: the concentration of risky users in the two-hop neighborhood.
  4. graph_global_penalty: penalty added from global threat evidence.
  5. contextual_score: score contribution from contextual mismatch checks.
  6. ml_fraud_probability: fraud probability produced by the ML subsystem.
  7. knn_fraud_similarity: similarity to nearby fraud embeddings.

These values are primarily for:

  • observability
  • analyst tools
  • model debugging

They are not guaranteed to be stable across engine versions.
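Since the values are diagnostic and version-unstable, a reasonable pattern is to emit them as structured logs rather than branch business logic on them. A minimal sketch, with an assumed logger name:

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("vertexy.diagnostics")  # assumed logger name

def log_contributions(tx_id: str, assessment: dict) -> None:
    """Emit featureContributions as a structured log line for
    observability and analyst tooling; no decisions are made here."""
    contributions = assessment.get("featureContributions") or {}
    log.info(json.dumps({"tx": tx_id, "contributions": contributions}))
```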

Degraded mode

If some dependencies are down, VertexY still returns a decision whenever possible.

In degraded scenarios:

  • reasonCodes will include subsystem degradation signals
  • action may be raised to a safer floor such as review
  • recommendedAction still reflects the engine result
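A degraded decision can be detected from the reason codes themselves. This sketch relies on the *_UNAVAILABLE suffix shared by the degradation codes listed above.

```python
def is_degraded(assessment: dict) -> bool:
    """True when any reason code signals a subsystem that could not
    contribute (the *_UNAVAILABLE codes documented above)."""
    return any(code.endswith("_UNAVAILABLE")
               for code in assessment.get("reasonCodes", []))

resp = {"action": "review", "reasonCodes": ["GRAPH_UNAVAILABLE"]}
if is_degraded(resp):
    # Weaker evidence: the safer-floor action still applies.
    print("degraded decision:", resp["action"])
```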

Improving score quality

You will get better performance when you:

  • ingest all supported event types
  • keep user and device identifiers stable
  • provide contextual data such as BIN, geo, timezone, and address country
  • submit feedback consistently