aws kinesis firehose to internal splunk

Kinesis Firehose is a managed offering from AWS that handles large streams of data for real-time logging and analytics.

While setting up a Kinesis Firehose stream to deliver into our internal Splunk servers, we came across many edge cases and caveats.

Throughout the process I probably spent over 10 hours on the phone with AWS engineers as we worked together to resolve many of the undocumented edge cases in the system.

While I cannot disclose all of the actual code implemented, I can point out some of the high-level issues to look out for when deploying your own Kinesis Firehose.

Splunk - Internal Access

Kinesis Firehose will send data to your Splunk HTTP Event Collector (HEC) from a range of public IP addresses.

You do not want to expose your Splunk instance directly to the public internet - it should be running completely inside your private network.

To enable Firehose to send data into Splunk, you will need to set up a reverse proxy in your DMZ that allows connections only from the AWS Kinesis Firehose IP ranges and forwards traffic to your internal proxy server.

This internal proxy server likewise accepts connections only from your DMZ proxy server, with its upstream set to your internal Splunk HEC.
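
As a sketch of the DMZ side, below is a hypothetical Terraform security group for the proxy host that admits HEC traffic only from the Firehose CIDR blocks published for us-east-1. The names, port, and variable are placeholders, and the CIDR ranges are region-specific - check the current AWS documentation for your region before using them:

resource "aws_security_group" "dmz_proxy" {
  name        = "firehose-dmz-proxy"
  description = "Allow inbound HEC traffic from Kinesis Firehose only"
  vpc_id      = var.dmz_vpc_id

  # Firehose public CIDR blocks for us-east-1; these are
  # region-specific and published in the AWS documentation.
  ingress {
    from_port = 8088
    to_port   = 8088
    protocol  = "tcp"
    cidr_blocks = [
      "34.238.188.128/26",
      "34.238.188.192/26",
      "34.238.195.0/26",
    ]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

The internal proxy's security group would mirror this, allowing ingress only from the DMZ proxy's security group rather than from public CIDRs.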

You will need a valid SSL certificate at your proxy's termination point; provide that HTTPS endpoint as the Splunk HEC endpoint in the Firehose configuration.

You should now be able to see test data coming through. If not, check your proxy server logs to ensure your traffic is making it through properly.

Kinesis Configuration

While you can quickly click through the console and create a stream, we need all of our infrastructure to be version controlled and automated.

We use Terraform for our infrastructure and service provisioning, and crafting the proper Terraform script that configured and deployed a working Kinesis Firehose stream integrated with Splunk - with a Lambda transformation function - was certainly a fun project.
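
As a rough sketch (not our production code), the core resource looks something like the following. The resource names, endpoint, and token variable are all placeholders, and the exact block layout depends on your AWS provider version - here s3_configuration is nested inside splunk_configuration, as in provider 5.x:

resource "aws_kinesis_firehose_delivery_stream" "splunk" {
  name        = "splunk-delivery-stream"
  destination = "splunk"

  splunk_configuration {
    # The HTTPS endpoint of the DMZ proxy described above.
    hec_endpoint               = "https://firehose-proxy.example.com:8088"
    hec_endpoint_type          = "Raw"
    hec_token                  = var.splunk_hec_token
    hec_acknowledgment_timeout = 300
    s3_backup_mode             = "FailedEventsOnly"

    # Failed events land in S3 for inspection and replay.
    s3_configuration {
      role_arn   = aws_iam_role.firehose.arn
      bucket_arn = aws_s3_bucket.firehose_backup.arn
    }

    # Optional Lambda transformation before delivery to Splunk.
    processing_configuration {
      enabled = true
      processors {
        type = "Lambda"
        parameters {
          parameter_name  = "LambdaArn"
          parameter_value = "${aws_lambda_function.transform.arn}:$LATEST"
        }
      }
    }
  }
}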

Some notes when crafting your own Terraform script:

Attaching Subscription Filters

After every re-deployment of your streams, you must run aws logs put-subscription-filter again to re-attach your subscription filters.

Below is an example command to do just that:

aws --profile [account] logs put-subscription-filter \
    --log-group-name "/aws/lambda/[log_group]" \
    --filter-name "[log_stream]" \
    --filter-pattern "" \
    --destination-arn "[delivery_stream_arn]" \
    --role-arn "[firehose_role_arn]"
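
Alternatively, you can let Terraform own the subscription filter so it is re-created along with the stream instead of re-attached by hand. A minimal sketch, with hypothetical resource and log group names:

resource "aws_cloudwatch_log_subscription_filter" "to_firehose" {
  name            = "splunk-delivery"
  log_group_name  = "/aws/lambda/my-service"
  filter_pattern  = ""
  destination_arn = aws_kinesis_firehose_delivery_stream.splunk.arn
  role_arn        = aws_iam_role.cwl_to_firehose.arn
}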


IAM Requirements

Most of the issues encountered with the Kinesis / Splunk integration were related to IAM.

The AWS documentation for the Splunk integration outlines essentially everything that needs to be created in terms of IAM policy documents.

However, these policy documents must be created exactly in the format shown - i.e. you cannot concatenate documents or iterate over similar elements for scaled systems as you might expect with other policy documents.
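
For example, the role that lets CloudWatch Logs write into the delivery stream needs its own trust and permissions documents, kept in exactly this shape. A minimal sketch, assuming us-east-1 and the hypothetical resource names used above:

resource "aws_iam_role" "cwl_to_firehose" {
  name = "cwl-to-firehose"

  # Trust policy: only the CloudWatch Logs service in this
  # region may assume the role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "logs.us-east-1.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "cwl_to_firehose" {
  name = "cwl-to-firehose"
  role = aws_iam_role.cwl_to_firehose.id

  # Permissions policy: allow writing records to the one
  # delivery stream, nothing broader.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["firehose:PutRecord", "firehose:PutRecordBatch"]
      Resource = aws_kinesis_firehose_delivery_stream.splunk.arn
    }]
  })
}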

This is where I spent much of the time on the phone with AWS engineers trying to figure out the quirks in the configuration. We found a solution for our systems and now the deployment and configuration of Kinesis Firehose and Splunk is fully documented and scripted.

Once all of this is done, you should see data flowing from your Firehose streams in all environments into your Splunk HEC.

last updated 2022-08-20