Packet Capture Replay (no Tap)

Can RockNSM conduct/ingest offline replay of packet captures? I have a customer who would like side-by-side analysis of their network; however, they do not want us to connect to their network. They will siphon off raw PCAP data and hand it off to us for analysis. I have considered using tcpreplay to play back the traffic. I didn't want to assume, but will RockNSM have any issue with this type of scenario?

tcpreplay will work just fine.

You can use tcpreplay, or you can tell Bro and/or Suricata to process the PCAPs directly (which will preserve the original timestamps).

Read in a PCAP with Bro:

mkdir /tmp/pcap
# Put PCAP in there ^^
cd /tmp/pcap
# Create a temporary working dir for bro
mkdir logs; chmod 777 logs; cd logs
for item in ../*.pcap; do
  # Read in the PCAP as the bro user and send logs to Kafka
  sudo -u bro -g bro /usr/bin/bro -C -r ${item} local
done

Reading in the PCAP with Suricata is similar:

mkdir /tmp/pcap
# Put PCAP in there ^^
cd /tmp/pcap
# Create a temporary working dir for suricata
mkdir logs; chmod 777 logs; cd logs
for item in ../*.pcap; do
  # Read in the PCAP as the suricata user and append to the normal eve.json
  sudo -u suricata -g suricata /usr/bin/suricata -k none -r ${item}
done

NOTE: The -C flag of Bro and the -k option of Suricata disable checksum validation, which is often needed when processing PCAPs that were not captured perfectly.

EDIT: Added chmod 777 to fix permissions warnings/errors

Hi, would the results from Bro and Suricata be available from Kibana with the pcaps’ original timestamps when running this way?

If there are many PCAPs and there are multiple nodes, would the multiple nodes be used to balance the processing load?

Thank you.

To answer your first question: the logs will show up in Kibana with the time they actually happened. So if the traffic is from two weeks ago, you will have to search that time range instead of seeing it appear in your current time series.

If you are replaying a PCAP through Bro/Suricata, it will only be processed on the machine that the PCAP file is on. If you have multiple Bro/Suricata machines feeding an Elastic cluster (for example, if you are tapping several distinct or remote network segments), the replay won't be distributed across the other nodes you are using for load balancing. Hope this helps!

There are two methods to replay PCAP, depending on your needs.

  1. Use bro and suricata with the existing config to read in the PCAP and write data to the pipeline. This will preserve the original timestamps as @koelslaw mentioned. You’ll find the data in Kibana by changing your time window to the original timestamps.

  2. Replay the PCAP to a monitor interface using tcpreplay. This will replay the packets at the original speed across the interface. In this way, the timestamps will get updated to the current time, and if the original PCAP covered an hour time window, then it will take an hour to replay all the traffic. tcpreplay has other options to increase or decrease the packet speed, but in either case, the data will appear in Kibana in the current time window just as if you were monitoring live traffic.
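For illustration, method 2 might look like the following sketch (the interface name em1 and the filename capture.pcap are placeholders, not from this thread; substitute your own monitor interface):

```shell
# Replay at the original capture speed onto the monitor interface
sudo tcpreplay --intf1=em1 capture.pcap

# Or override the speed: as fast as possible, or at a fixed rate
sudo tcpreplay --topspeed --intf1=em1 capture.pcap
sudo tcpreplay --mbps=100 --intf1=em1 capture.pcap
```

Note that tcpreplay needs root (or CAP_NET_RAW) to write raw packets to the interface, and the sensor will timestamp the packets as they arrive, i.e. with the current time.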

Hi koelslaw and dcode,

Thank you for your replies. After some testing, Suricata with its -r option to read in PCAPs from a folder is working well, and its performance is much better than tcpreplay's.

However, as there are many PCAPs, I'm trying to deploy multiple nodes so that Suricata can process the PCAPs faster. All the PCAPs would be loaded from only one machine, but I'm trying to figure out whether Suricata can make use of the processing power of the additional nodes in a multi-node deployment.

Currently I'm stuck at a fresh installation of a multi-node deployment, as mentioned in

Not sure if there are some settings that I’ve missed out?

Thank you.

This has to do with a difference in the template between Elastic versions. I'm in the process of running through my notes to try to find the change.

Pulled from the Elasticsearch documentation:

discovery.seed_hosts: Provides a list of master-eligible nodes in the cluster. Each value has the format host:port or host, where port defaults to the setting transport.profiles.default.port. This setting was previously known as discovery.zen.ping.unicast.hosts.

Configure this setting on all nodes as follows:

discovery.seed_hosts: ["", "", ""]
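As a sketch (the cluster name and hostnames below are placeholders, not values from this thread), each node's elasticsearch.yml might look like:

```yaml
# elasticsearch.yml -- same on every node; substitute your own hostnames
cluster.name: rock
discovery.seed_hosts: ["rock01.lan", "rock02.lan", "rock03.lan"]
# On a brand-new Elasticsearch 7+ cluster, bootstrapping also needs:
cluster.initial_master_nodes: ["rock01", "rock02", "rock03"]
```

On a fresh 7.x install, omitting cluster.initial_master_nodes is a common reason the nodes never form a cluster, which in turn makes the cluster API calls in the Ansible run fail.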

Hi koelslaw,

I've configured discovery.seed_hosts on all nodes and am still stuck at the same error:

TASK [elasticsearch : Running step restart] ********************************************************************
included: /usr/share/rock/roles/elasticsearch/tasks/restart.yml for rock01

TASK [elasticsearch : Disable cluster shard allocation] ********************************************************
fatal: [rock01]: FAILED! => {"msg": "The conditional check 'result.json.acknowledged | bool' failed. The error was: error while evaluating conditional (result.json.acknowledged | bool): 'dict object' has no attribute 'json'"}

Is there any way to bypass or reset this "Disable cluster shard allocation" step?

Thank you.