Use Elastic Cloud instead of on-premises

#1

Hello,

I am attempting to use Rock-NSM for the Bro/Suricata sensors and the Kafka/Logstash pipeline. I have an Elastic Cloud deployment, and I have modified the Logstash configuration to output to it. So far I have been able to get my indices into Elastic Cloud just fine. However, I ran into a problem while migrating the saved objects from Kibana: after setting up the mappings, visualizations, and finally dashboards, I get shard failures when loading the dashboards. Has anyone else done something similar, or does anyone have advice on what might be wrong?
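For anyone attempting the same setup, the change lives in the elasticsearch output block of the Logstash pipeline. A minimal sketch, assuming the standard logstash-output-elasticsearch plugin; the cloud_id, credentials, and index name below are placeholders, not values from the original post:

```conf
output {
  elasticsearch {
    # Cloud ID and credentials come from the Elastic Cloud console.
    # Placeholder values -- substitute your own deployment's.
    cloud_id   => "my-deployment:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbw=="
    cloud_auth => "elastic:changeme"
    # Index name is an assumption here; ROCK ships with daily indices.
    index      => "bro-%{+YYYY.MM.dd}"
  }
}
```

With cloud_id and cloud_auth set, the plugin handles the host URL and TLS for you, so no hosts line is needed.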


#2

Thought I would update my post in case anyone tries the same thing in the future. The problem is that the index mapping is not using the template that Rock-NSM uses, so the keyword fields are not mapped properly. I am currently trying to get the templates used by Rock-NSM applied to my Elastic Cloud deployment when the events are sent by Logstash. I have set the manage_template option to true and provided paths to the bro_index.json, suricata_index.json, and failures_index.json files that Logstash has access to. The Logstash logs show it attempting to use the mappings, but I must still be missing something, because they are not being applied to the index mappings. Do I have the wrong files, maybe? Anyone have an idea?
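For reference, this is roughly what the template options in the elasticsearch output look like. This is a sketch, not the poster's actual config: the file path and template name are assumptions, and note that each elasticsearch output block manages only a single template file, so three templates would need three output blocks (or manual loading, as the next post describes):

```conf
output {
  elasticsearch {
    # Placeholder credentials -- substitute your deployment's own.
    cloud_id           => "my-deployment:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbw=="
    cloud_auth         => "elastic:changeme"
    # Ask Logstash to install the template itself.
    manage_template    => true
    # Path and template name are assumptions for illustration.
    template           => "/etc/logstash/bro_index.json"
    template_name      => "bro_index"
    template_overwrite => true
  }
}
```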


#3

Okay, I have the final piece of my puzzle. It turns out that I am supposed to load bro_index.json, suricata_index.json, and failures_index.json into the _template section of my Elastic Stack. Once I did this (using the PUT _template/'template name' command), the index mappings were the same as on Rock-NSM. Now all of my data is flowing correctly.
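The step above can be sketched as a Kibana Dev Tools (Console) request per template file; the template name "bro_index" is a placeholder, and the request body is simply the full contents of the corresponding JSON file:

```
# Repeat once for each of bro_index.json, suricata_index.json,
# and failures_index.json (template name is an assumption).
PUT _template/bro_index
<paste the full contents of bro_index.json here>
```

Loading the templates before (re)indexing matters: a template only affects indices created after it is installed, so existing indices keep their old mappings until they are reindexed or rolled over.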


#4

Awesome! Thanks so much for posting how you solved this. It will help the rest of the community as they use RockNSM in new and exciting ways!
