RockNSM Release Candidate Testing Contributions


The Problem:

ROCK builds quickly; then it gets tested (not as quickly).

Currently, the broad feature set and flexible deployment options of the ROCK stack, combined with a highly automated build cycle, mean that evaluating a release candidate’s readiness is a recurring set of tasks in need of definition, support, and, ultimately, automation where possible.

This topic can serve as a place to:

  • outline the objectives of RC testing
  • identify a quickly implemented method we can iterate on to evaluate release worthiness
  • develop a method to recognize QA contributors (because even in the unlikely event we automate this entirely, somebody has to analyze the results)


Working Definition of Objectives:

Objectives should be pegged to release status and lifecycle. Since all builds are now executed in COPR, which provides its own build monitoring, there is little reason to put QA expectations on that process; instead, we can focus on Deployment Integrity, Unit Checks, and System Checks for Alpha and Stable status releases.
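
As a strawman for pegging checks to release status, the objectives could be captured as data that test tooling consumes. The category and status names below are illustrative, not an existing ROCK convention:

```python
# Hypothetical mapping of release status to the QA check categories that
# gate it. Per the working definition above, Alpha and Stable releases
# both warrant all three categories; names here are illustrative.
QA_MATRIX = {
    "alpha":  ["deployment_integrity", "unit_checks", "system_checks"],
    "stable": ["deployment_integrity", "unit_checks", "system_checks"],
}

def required_checks(release_status):
    """Return the check categories a release candidate must pass."""
    return QA_MATRIX.get(release_status.lower(), [])
```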

Deployment Integrity == does what we have built start properly in each supported sensor configuration, offline or online?
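
A minimal sketch of what automating that could look like, assuming systemd-managed services; the service list here is an assumption and would differ per sensor configuration:

```python
# Deployment-integrity sketch: verify that the services a given sensor
# configuration should run are active under systemd. The service names
# below are an assumption; the real set depends on the deployment.
import subprocess

EXPECTED_SERVICES = ["zookeeper", "kafka", "elasticsearch", "kibana",
                     "logstash", "suricata", "bro"]

def service_is_active(name):
    """systemctl exits 0 from 'is-active --quiet' when a unit is active."""
    return subprocess.run(
        ["systemctl", "is-active", "--quiet", name]).returncode == 0

if __name__ == "__main__":
    for svc in EXPECTED_SERVICES:
        print(f"{svc}: {'active' if service_is_active(svc) else 'NOT active'}")
```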

Unit Check == does each service do what it’s supposed to do?
Weak Examples:

  1. Does the Bro configuration produce the expected log streams on disk and in Kafka? (see the sketch after this list)
  2. Does FSF process all the expected sub-objects when handed a test file?
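
A rough sketch of how the first example could be checked. The log path, topic name, and broker address are assumptions about a stock install, and the third-party kafka-python client is used here only for illustration:

```python
# Unit-check sketch: do Bro logs appear both on disk and in Kafka?
# Path, topic, and broker below are assumptions; adjust per deployment.
import os
from kafka import KafkaConsumer  # pip install kafka-python (assumed client)

BRO_LOG = "/data/bro/logs/current/conn.log"  # assumed log location
KAFKA_TOPIC = "bro-raw"                      # assumed topic name
BROKER = "localhost:9092"

def bro_log_on_disk(path=BRO_LOG):
    """The log should exist and be non-empty once traffic is flowing."""
    return os.path.isfile(path) and os.path.getsize(path) > 0

def bro_log_in_kafka(topic=KAFKA_TOPIC, broker=BROKER, timeout_ms=5000):
    """At least one message should land on the topic within the timeout."""
    consumer = KafkaConsumer(topic, bootstrap_servers=broker,
                             auto_offset_reset="earliest",
                             consumer_timeout_ms=timeout_ms)
    try:
        return next(iter(consumer), None) is not None
    finally:
        consumer.close()
```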

System Check == do the services operate appropriately in concert and provide a quality analyst / sensor admin experience?
Weak Examples:

  1. Do the Suricata logs in Elastic have the appropriate syntax and structure so that they can be easily “pivoted” to (I need a better word for this) from Bro logs using Kibana? (see the sketch after this list)
  2. Do rock start / rock stop function appropriately to ensure that dependency services are started first / stopped last?
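
For the first example, a sketch of an automated structure check against Elasticsearch. The index pattern and field names are assumptions and would need to match the actual mappings shipped in the release under test:

```python
# System-check sketch: confirm Suricata events in Elasticsearch carry the
# fields an analyst pivots on from Bro logs in Kibana. Index pattern and
# field names are assumptions; match them to the real mappings.
import requests

ES_URL = "http://localhost:9200"
INDEX = "suricata-*"                               # assumed index pattern
PIVOT_FIELDS = ["src_ip", "dest_ip", "timestamp"]  # assumed field names

def docs_missing_field(field):
    """Count Suricata documents that lack a pivot field entirely."""
    query = {"query": {"bool": {"must_not": {"exists": {"field": field}}}}}
    resp = requests.get(f"{ES_URL}/{INDEX}/_count", json=query, timeout=10)
    resp.raise_for_status()
    return resp.json()["count"]

if __name__ == "__main__":
    for field in PIVOT_FIELDS:
        missing = docs_missing_field(field)
        print(f"{field}: {'OK' if missing == 0 else f'{missing} docs missing it'}")
```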