Introduction

Overview

In this workshop, you will build an Amazon Virtual Private Cloud (VPC) flow log ingest pipeline that runs entirely within your VPC. S3 bucket notifications, Amazon SQS, and AWS Lambda deliver flow log events to the Amazon Elasticsearch Service, and Kibana provides real-time monitoring.

One of the more popular patterns is to feed logs generated by AWS services through Amazon CloudWatch Logs. This pattern is easy to set up and manage: there are very few moving parts to build, and it gets you going quickly.

For flow logs, however, this approach uses Amazon CloudWatch Logs as a pass-through: you pay for CloudWatch Logs ingestion in addition to the Amazon ES domain, which diminishes the cost advantage of Elasticsearch and Kibana. Many customers want to reduce the cost of monitoring architectures that combine Amazon ES with other integrations in the AWS ecosystem.

An alternative approach for flow logs involves the following components:

1) VPC Flow Logs – captures information about the IP traffic in the VPC (both public and private subnets)

2) Amazon Virtual Private Cloud (VPC) – you enable flow logs on the VPC to write to one of two destinations (as of the time of this writing); a sketch of enabling this with the AWS SDK follows this list.

- [Amazon Simple Storage Service (S3)](https://aws.amazon.com/s3/)
- [Amazon CloudWatch Logs (CWL)](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html)

3) Amazon Simple Queue Service (SQS) – buffers the S3 bucket notification events and decouples their processing from the rest of the pipeline; see the notification-wiring sketch after this list.

4) AWS Lambda – runs Java code, with no servers to manage, that writes to the Amazon Elasticsearch Service using the Java High Level REST Client with SigV4 request signing; see the signed-client sketch after this list.

5) Amazon Cognito – provides user authentication for Kibana on the Amazon Elasticsearch Service domain

6) AWS Identity and Access Management (IAM) – authorization / entitlements to indexes and other resources for the Amazon Elasticsearch Service domain

7) NGINX – deployed on an Amazon EC2 instance as a proxy server for secure access to the Kibana endpoint inside the VPC

8) Amazon S3 – holds the flow log output as gzip-compressed (.gz) files; see the gzip-reading sketch after this list.
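
As a concrete illustration of item 2, here is a minimal sketch of enabling VPC flow logs with an S3 destination using the AWS SDK for Java v2. The VPC ID and bucket ARN are placeholders, and the workshop itself may configure this step through the console instead.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.CreateFlowLogsRequest;
import software.amazon.awssdk.services.ec2.model.CreateFlowLogsResponse;
import software.amazon.awssdk.services.ec2.model.FlowLogsResourceType;
import software.amazon.awssdk.services.ec2.model.LogDestinationType;
import software.amazon.awssdk.services.ec2.model.TrafficType;

public class EnableVpcFlowLogs {
    public static void main(String[] args) {
        // Placeholder identifiers: substitute your own VPC ID and flow log bucket ARN.
        String vpcId = "vpc-0123456789abcdef0";
        String bucketArn = "arn:aws:s3:::my-flow-log-bucket";

        try (Ec2Client ec2 = Ec2Client.create()) {
            CreateFlowLogsRequest request = CreateFlowLogsRequest.builder()
                    .resourceIds(vpcId)
                    .resourceType(FlowLogsResourceType.VPC)
                    .trafficType(TrafficType.ALL)              // capture accepted and rejected traffic
                    .logDestinationType(LogDestinationType.S3) // the S3 destination used in this workshop
                    .logDestination(bucketArn)
                    .build();

            CreateFlowLogsResponse response = ec2.createFlowLogs(request);
            System.out.println("Created flow log(s): " + response.flowLogIds());
        }
    }
}
```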
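
For item 3, S3 bucket notifications must deliver object-created events to the SQS queue. The following sketch wires that up with the AWS SDK for Java v2, using placeholder bucket and queue names; the queue's access policy must separately allow S3 to send messages to it.

```java
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.Event;
import software.amazon.awssdk.services.s3.model.NotificationConfiguration;
import software.amazon.awssdk.services.s3.model.PutBucketNotificationConfigurationRequest;
import software.amazon.awssdk.services.s3.model.QueueConfiguration;

public class ConfigureBucketNotifications {
    public static void main(String[] args) {
        // Placeholder names: the flow log bucket and the SQS queue that buffers its events.
        String bucketName = "my-flow-log-bucket";
        String queueArn = "arn:aws:sqs:us-east-1:123456789012:flow-log-events";

        try (S3Client s3 = S3Client.create()) {
            QueueConfiguration queueConfig = QueueConfiguration.builder()
                    .queueArn(queueArn)
                    .events(Event.S3_OBJECT_CREATED) // notify on every new .gz flow log object
                    .build();

            s3.putBucketNotificationConfiguration(PutBucketNotificationConfigurationRequest.builder()
                    .bucket(bucketName)
                    .notificationConfiguration(NotificationConfiguration.builder()
                            .queueConfigurations(queueConfig)
                            .build())
                    .build());
        }
    }
}
```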
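
For item 4, the Lambda function signs its requests to the domain with SigV4. The sketch below follows the request-signing pattern from the Amazon Elasticsearch Service documentation: an AWS4Signer and the AWSRequestSigningApacheInterceptor (copied into the project from the aws-request-signing-apache-interceptor sample on GitHub) are attached to the Java High Level REST Client. The endpoint, region, index name, and document fields are placeholders.

```java
import com.amazonaws.auth.AWS4Signer;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.http.AWSRequestSigningApacheInterceptor;
import org.apache.http.HttpHost;
import org.apache.http.HttpRequestInterceptor;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

import java.io.IOException;
import java.util.Map;

public class SignedEsClient {
    // Placeholder values: supply your own domain endpoint and region.
    private static final String SERVICE_NAME = "es";
    private static final String REGION = "us-east-1";
    private static final String ENDPOINT = "https://vpc-my-domain.us-east-1.es.amazonaws.com";

    public static RestHighLevelClient esClient() {
        AWS4Signer signer = new AWS4Signer();
        signer.setServiceName(SERVICE_NAME);
        signer.setRegionName(REGION);

        // Interceptor that SigV4-signs every outgoing request with the Lambda role's credentials.
        AWSCredentialsProvider credentials = new DefaultAWSCredentialsProviderChain();
        HttpRequestInterceptor interceptor =
                new AWSRequestSigningApacheInterceptor(SERVICE_NAME, signer, credentials);

        return new RestHighLevelClient(
                RestClient.builder(HttpHost.create(ENDPOINT))
                        .setHttpClientConfigCallback(cb -> cb.addInterceptorLast(interceptor)));
    }

    public static void main(String[] args) throws IOException {
        try (RestHighLevelClient client = esClient()) {
            // Index a single (hypothetical) flow log record into a placeholder index.
            IndexRequest request = new IndexRequest("vpc-flow-logs")
                    .source(Map.of("srcaddr", "10.0.1.15", "dstaddr", "10.0.2.20", "action", "ACCEPT"));
            client.index(request, RequestOptions.DEFAULT);
        }
    }
}
```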
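
Finally, for item 8, each S3 object is a gzip-compressed batch of flow log records. Here is a sketch of reading one such object, assuming placeholder bucket and key names that would normally arrive in the SQS notification message.

```java
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;

public class ReadFlowLogObject {
    public static void main(String[] args) throws IOException {
        // Placeholder bucket/key: in the pipeline these come from the SQS bucket notification event.
        String bucket = "my-flow-log-bucket";
        String key = "AWSLogs/123456789012/vpcflowlogs/us-east-1/2020/01/01/sample.log.gz";

        try (S3Client s3 = S3Client.create();
             BufferedReader reader = new BufferedReader(new InputStreamReader(
                     new GZIPInputStream(s3.getObject(GetObjectRequest.builder()
                             .bucket(bucket).key(key).build())),
                     StandardCharsets.UTF_8))) {
            // Each line is a space-delimited flow log record; the first line is the field header.
            reader.lines().forEach(System.out::println);
        }
    }
}
```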