
From Checklists to Live Risk: Fusing DISA STIG Scans with Splunk Detections

Learn how to merge STIG/SCAP compliance scans with Splunk security detections to transform static STIG failures into live, risk-aware signals that prioritize controls based on real security events.

DISA STIG · SCAP · OpenSCAP · Nessus · Splunk · RBA · DoD · Compliance

Most organizations in DoD and federal spaces already do two things well:

  1. Run STIG/SCAP configuration scans (OpenSCAP, Nessus, etc.) on servers and endpoints.
  2. Pump security telemetry into Splunk (ES, ESCU, UEBA, custom detections).

The problem: these two worlds rarely talk to each other. Compliance lives in STIG Viewer and POA&Ms; the SOC lives in notables and dashboards.

This article shows how to merge those worlds by turning static STIG failures into live, risk-aware signals that are prioritized using real detections in Splunk.


1. STIGs, SCAP, and Your Scanners

DISA publishes STIGs as SCAP content (XCCDF/OVAL) bundled in SCAP data streams per the NIST SCAP specification. Tools like:

  • OpenSCAP (oscap) – Open-source SCAP library and CLI that evaluates hosts against XCCDF/OVAL and emits ARF/XCCDF XML plus HTML reports.
  • Nessus / Tenable.sc – Run STIG/SCAP compliance policies and export results as .nessus XML, XCCDF, or JSON.

Treat these tools as control engines and normalize their results into a Splunk-friendly format at the per-asset, per-control level.
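To make the normalization concrete, here is a minimal Python sketch (an illustration, not a hardened parser): it assumes Python 3.8+ and a plain XCCDF 1.2 result file from oscap, and flattens rule results into the JSON-lines shape used in Section 3.

# Flatten an XCCDF result file into per-asset, per-rule JSON lines for Splunk.
# Produce the input with, e.g.:
#   oscap xccdf eval --profile <stig_profile> --results results-xccdf.xml <datastream>
# Note: ARF output wraps this XCCDF in an asset-report envelope and needs extra unwrapping.
import json
import sys
import xml.etree.ElementTree as ET

def flatten_xccdf(path, asset_id):
    root = ET.parse(path).getroot()
    for rr in root.iterfind(".//{*}rule-result"):   # {*} namespace wildcard: Python 3.8+
        result = rr.find("{*}result")
        yield {
            "asset_id": asset_id,
            "stig_rule_id": rr.get("idref"),
            "severity": rr.get("severity", "unknown"),
            "status": result.text if result is not None else "unknown",
            "scanner": "openscap",
        }

if __name__ == "__main__":
    for record in flatten_xccdf(sys.argv[1], sys.argv[2]):
        print(json.dumps(record))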



2. Splunk as the Correlation Brain

Splunk Enterprise Security's risk-based alerting (RBA) lets you:

  • Assign risk scores to events.
  • Aggregate risk by entity (host, user, app).
  • Fire risk notables when risk passes a threshold instead of generating hundreds of one-off alerts.
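To make that mechanism concrete, here is a toy Python sketch of threshold-based risk aggregation (an illustration of the idea, not Splunk's implementation; the threshold and scores are invented):

# Accumulate risk per entity; fire one risk notable when the 24h sum crosses a
# threshold, instead of alerting on every contributing event.
from collections import defaultdict

THRESHOLD = 100
risk = defaultdict(float)

def observe(entity, score):
    risk[entity] += score
    return risk[entity] >= THRESHOLD   # True -> open a risk notable

events = [("web01", 40), ("web01", 35), ("web01", 30)]
fired = [entity for entity, score in events if observe(entity, score)]
print(fired)  # ['web01'] -- only the event that crosses 100 fires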

We can extend that idea:

STIG failures are static risk; detections are dynamic risk. Splunk is where they meet.

MITRE ATT&CK mappings from ES/ESCU add context so you can say things like:

"This host is failing a remote admin hardening control and is seeing lateral movement activity in the last 24 hours."



3. The Data Model: Two Indices, One Language

To unify everything, standardize on two logical indices that share one vocabulary. The field names below are a pragmatic convention; map them onto Splunk's Common Information Model and your NIST 800-53 control mappings where they overlap.

3.1 index=stig_scans – Configuration & Control State

Per-asset, per-control records, e.g.:

  • asset_id (normalized hostname/CMDB key)
  • ip, os
  • control_id (e.g., SRG-OS-000480)
  • stig_rule_id (e.g., V-251801)
  • profile (CAT I / CAT II / CAT III)
  • status (pass/fail/other)
  • scanner (openscap/nessus/other)
  • first_seen, last_seen
  • evidence, source_file_sha

3.2 index=sec_events – Live Security Signals

Aggregated detections, e.g.:

  • asset_id, ip
  • _time or event_time
  • detector (correlation search or ESCU title)
  • severity (low/medium/high/critical)
  • tactic, technique (MITRE ATT&CK)
  • notable_id / risk_notable_id

The glue is consistent asset_id across both indices—asset identity hygiene is critical for correlation accuracy.
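Since everything hinges on that key, a small normalization helper pays for itself early. A minimal sketch (the rules here are assumptions; encode your CMDB's conventions instead):

# Collapse OpenSCAP, Nessus, and Splunk host names onto one asset_id:
# lowercase and strip the domain suffix.
def normalize_asset_id(hostname):
    return hostname.strip().lower().split(".")[0]   # "WEB01.example.MIL" -> "web01"

assert normalize_asset_id("WEB01.example.MIL") == "web01"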


4. From Raw Data to Priority Score

We don’t just want “how many STIGs are failing?” We want:

“Which high-impact control failures are on assets that actively look bad?”

Treat:

  • STIG failures → static risk
  • Detections → dynamic risk

4.1 Example Scoring Model

You can tune this, but a simple scheme:

  • DISA severity → weight:
    • CAT I = 5, CAT II = 3, CAT III = 1
  • Detection severity → weight:
    • critical = 4, high = 3, medium = 2, low = 1
  • Time decay (based on age of last detection):
    • <24h → 1.0, 1–7d → 0.6, >7d → 0.3

Then:

priority = (cat_weight * 0.6 + detection_weight * 0.4) * time_decay

This is not an official standard; it’s a pragmatic RBA-style model you can explain and tune.
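For tuning, the same arithmetic is easy to carry outside Splunk. A minimal Python sketch mirroring the weights above (the tables are the Section 4.1 assumptions, not a standard):

# Priority = weighted static risk (CAT) + weighted dynamic risk (detection),
# scaled by how fresh the last detection is.
CAT_WEIGHT = {"CAT I": 5, "CAT II": 3, "CAT III": 1}
SEV_WEIGHT = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def priority(profile, severity, age_hours):
    decay = 1.0 if age_hours <= 24 else 0.6 if age_hours <= 168 else 0.3
    return round((CAT_WEIGHT[profile] * 0.6 + SEV_WEIGHT[severity] * 0.4) * decay, 2)

# A CAT I failure on a host with a 3-hour-old critical detection:
assert priority("CAT I", "critical", 3) == 4.6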


5. SPL Pattern for Correlation

  1. Materialize latest failing controls into a lookup:
index=stig_scans status="fail"
| stats latest(last_seen) as last_seen values(stig_rule_id) as rule_ids by asset_id control_id profile
| eval cat=case(profile="CAT I",5, profile="CAT II",3, true(),1)
| table asset_id control_id profile cat rule_ids last_seen
| outputlookup latest_stig_failures.csv
  2. Join with recent detections and score:
| inputlookup latest_stig_failures.csv
| join type=inner asset_id [
    search index=sec_events earliest=-7d
    | eval sev=case(severity="critical",4,severity="high",3,severity="medium",2,true(),1)
    | eval age_hours=(now()-_time)/3600
    | stats max(sev) as escu_sev, min(age_hours) as min_age, values(detector) as detectors by asset_id
  ]
| eval decay=case(min_age<=24,1, min_age<=168,0.6, true(),0.3)
| eval priority=round(((cat*0.6)+(escu_sev*0.4))*decay,2)
| sort - priority
| table priority asset_id control_id profile escu_sev detectors min_age

Now you have a ranked triage list: which controls on which assets matter most right now.


6. Notables, Tickets, and Closed Loop

Turn the SPL into a correlation search in ES that:

  • Runs on a schedule.
  • Filters to priority >= threshold.
  • Creates a notable or risk-notable containing:
    • asset_id, ip
    • control_id, stig_rule_id, profile
    • priority, detectors, tactic, technique

Pipe those into your ITSM (ServiceNow, Jira, etc.) using Splunk's incident review integration. In each ticket, include:

  • The STIG rule text and recommended fix, so engineers don't have to open STIG Viewer.
  • The priority score and the detections that drove it.
  • Scan evidence and timestamps for the audit trail.
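If your ITSM lacks a native Splunk integration, even a thin REST shim works. A minimal sketch against Jira's v2 REST API (the URL, project key, and credential handling are placeholders; ServiceNow's Table API would be the analogous target):

# Open one Jira ticket per high-priority correlation result.
import requests

JIRA_URL = "https://jira.example.mil"        # placeholder
AUTH = ("svc_splunk", "api-token")           # placeholder; pull from a secrets store

def open_ticket(asset_id, control_id, priority, detectors):
    payload = {"fields": {
        "project": {"key": "STIG"},          # placeholder project key
        "issuetype": {"name": "Task"},
        "summary": f"[{priority}] {control_id} failing on {asset_id}",
        "description": "Detections: " + ", ".join(detectors),
    }}
    r = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH, timeout=30)
    r.raise_for_status()
    return r.json()["key"]                   # e.g. "STIG-123"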

When tickets close, trigger a re-scan. If the control flips to pass and detections quiet down, the priority drops and the incident naturally de-escalates.



7. Practical Gotchas

  • Some STIG items require manual review and won’t show up in machine-readable XML.
  • Different scanners may implement the same STIG slightly differently—normalize on logical fields.
  • Scanning every few minutes is unrealistic; use a mix of periodic full scans and targeted re-scans.
  • Asset identity hygiene is everything: normalize asset_id early.

8. Quick-Start Checklist

Phase 1 – Data In

  1. Pick one platform (e.g., RHEL in IL5).
  2. Standardize on asset_id.
  3. Run OpenSCAP or Nessus STIG scans.
  4. Parse ARF/XCCDF or .nessus into flat JSON/CSV.
  5. Ingest into index=stig_scans using the Tenable Add-On for Splunk or custom parsers (see the HEC sketch below).
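For the ingest step, Splunk's HTTP Event Collector is the simplest path when no add-on fits. A minimal sketch (the URL, token, and sourcetype are placeholders):

# POST one flattened record per event into index=stig_scans via HEC.
import requests

HEC_URL = "https://splunk.example.mil:8088/services/collector/event"   # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                      # placeholder

def send_to_splunk(record):
    r = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        json={"event": record, "index": "stig_scans", "sourcetype": "stig:scan"},
        timeout=30,
    )
    r.raise_for_status()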

Phase 2 – Security Events

  1. Normalize ES/ESCU/UEBA detections into index=sec_events.
  2. Ensure asset_id matches what you used in STIG data.

Phase 3 – Correlate & Tune

  1. Build latest_stig_failures.csv.
  2. Join it with sec_events and calculate priority.
  3. Tune weights and thresholds based on your risk model.

Phase 4 – Operationalize

  1. Turn the SPL into a correlation search.
  2. Wire into ITSM for tickets.
  3. Build dashboards for:
    • Top risky assets
    • Top risky controls
    • Detections over weak controls



9. Beyond STIGs

This pattern generalizes to any machine-readable security baseline.

Any time you have:

  1. A machine-readable baseline (SCAP, CIS, vendor-specific).
  2. A scanner that produces per-asset control results.
  3. Logs and detections in Splunk.

…you can apply the same model.

Baselines → controls.
Logs → behaviors.
Splunk → the place they meet.



10. Closing Thoughts

You’re already paying the price to:

  • Run STIG/SCAP scans,
  • Maintain POA&Ms,
  • Tune Splunk detections,
  • Map everything to NIST/CIS/ATT&CK.

The only missing piece is connecting them.

Start small:

  • One platform
  • One scanner
  • One dashboard
  • One correlation search

Show a single compelling story:

“This domain controller is failing four CAT I remote admin controls, and we have multiple credential access and lateral movement detections on it in the last 48 hours. Here’s the ticket, here’s the STIG text, and here’s the evidence.”

That's the kind of narrative that resonates with CISOs, auditors, and engineers—and moves you from checklist compliance to risk-driven hardening.


References & Additional Resources

Standards & Specifications

DISA STIG Resources

Tools & Implementations

Splunk Integration