Chapter 3

Linux

Because Windows incident response and memory analysis are great… but what about Linux?!

Store Memory Snapshot

Supported Linux Operating Systems

  • Ubuntu (Generic, Google Cloud, Microsoft Azure, Amazon AWS)
  • Red Hat Linux
  • Google Container-Optimized OS
  • Amazon Linux

Local

Update the apt package index and install the latest version of Docker so you can run the free containerized version of DumpItForLinux.

sudo apt-get update
sudo apt install docker.io

If you are using CentOS/RHEL, consult the official Docker documentation for installation instructions.
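
For reference, a sketch of the typical CentOS/RHEL install sequence, based on Docker's upstream repository instructions (verify against the current documentation):

# Add Docker's official repository and install the engine.
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
# Start the Docker daemon.
sudo systemctl start docker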

Map the current directory to the container WORKDIR and create a memory snapshot:

COMAE_WORKDIR=$(sudo docker inspect -f '{{.Config.WorkingDir}}' comaeio/dumpit-linux)
sudo docker run --privileged -v $(pwd):$COMAE_WORKDIR comaeio/dumpit-linux --dump-it --action store

This will save the archive in the current directory.
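
A quick listing confirms the snapshot archive was written; the exact file name is generated by DumpItForLinux at run time:

ls -lh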

Send to Comae Stardust

Update the apt package index and install the latest version of Docker so you can run the free containerized version of DumpItForLinux.

sudo apt-get update
sudo apt install docker.io

Your Client ID and Secret ID can be found when you log in to your Stardust account under Settings > Integrations.

Send Memory Snapshot to Comae Stardust

Run the DumpItForLinux command using Docker with the --snap-it and --action upload-comae flags, providing your Comae Stardust credentials.

sudo docker run --privileged comaeio/dumpit-linux --snap-it --comae-client-id <Client ID> --comae-client-secret <Secret ID> --action upload-comae

DumpItForLinux will send the pre-processed data to Comae Stardust.
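
To keep the credentials out of your shell history, one option is to read them into shell variables first. This is a minimal bash sketch; the variable names are placeholders of our choosing, not something DumpItForLinux itself reads:

# Read credentials without echoing them to the terminal (bash).
read -rsp "Client ID: " COMAE_CLIENT_ID; echo
read -rsp "Client Secret: " COMAE_CLIENT_SECRET; echo
# The variables expand in the calling shell before sudo runs the container.
sudo docker run --privileged comaeio/dumpit-linux --snap-it --comae-client-id "$COMAE_CLIENT_ID" --comae-client-secret "$COMAE_CLIENT_SECRET" --action upload-comae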

Send Full Memory Image to Comae Stardust

Run the DumpItForLinux command using Docker with the --dump-it and --action upload-comae flags.

sudo docker run --privileged comaeio/dumpit-linux --dump-it --comae-client-id <Client ID> --comae-client-secret <Secret ID> --action upload-comae

DumpItForLinux will send a full memory image to Comae Stardust.

Google Cloud Platform

Getting started with GCP

A bucket must already exist in your GCP Storage before you run the Docker command for DumpItForLinux.
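
If the bucket does not exist yet, one way to create it is with the gsutil CLI (the bucket name is a placeholder):

gsutil mb gs://[BUCKET_NAME]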

To interact with the Google Cloud Platform through DumpItForLinux, you will need a service account and a credential file in JSON format. Please check the official documentation on service accounts and credential files: https://cloud.google.com/iam/docs/creating-managing-service-account-keys

You can optionally generate and download the credential file using gcloud CLI commands. Inside the CLI, log in to your GCP account.

gcloud auth login

You will be prompted with a link to authenticate you as a GCP user. Open that link, log in with your GCP account, and copy the code provided. Paste it into the console to finish the authentication process.
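
You can verify which account is now active:

gcloud auth list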

Set the GCP project you are working on by using the following command.

gcloud config set project [PROJECT_ID]

Create a service account.

gcloud iam service-accounts create [YOUR_SERVICE_ACCOUNT_NAME]
gcloud projects add-iam-policy-binding [PROJECT_ID] --member "serviceAccount:[YOUR_SERVICE_ACCOUNT_NAME]@[PROJECT_ID].iam.gserviceaccount.com" --role "roles/owner"

Create a service account key.

gcloud iam service-accounts keys create /tmp/[FILE_NAME].json --iam-account [YOUR_SERVICE_ACCOUNT_NAME]@[PROJECT_ID].iam.gserviceaccount.com
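
To confirm the key was created, list the keys attached to the service account:

gcloud iam service-accounts keys list --iam-account [YOUR_SERVICE_ACCOUNT_NAME]@[PROJECT_ID].iam.gserviceaccount.com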

Install the latest version of Docker so you can run the free containerized version of DumpItForLinux.

sudo apt install docker.io

Run the DumpItForLinux command using Docker with the --snap-it and --action upload-gcp flags. You need to provide the path to the JSON file that contains your service account key and the bucket name.

sudo docker run -v /tmp/[FILE_NAME].json:/tmp/[FILE_NAME].json --privileged comaeio/dumpit-linux --snap-it --action upload-gcp --gcp-creds-file /tmp/[FILE_NAME].json --bucket [BUCKET_NAME]

DumpItForLinux will upload the preprocessed data to your specified GCP Storage bucket.
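
Once the run completes, you can confirm the objects landed in the bucket:

gsutil ls gs://[BUCKET_NAME]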

To upload a full memory image to GCP Storage, replace the --snap-it flag with --dump-it using the same docker command.

Microsoft Azure

You will need your storage account's Storage Account Name and Storage Account Key. Both can be found when you log in to your Azure account under Storage accounts > [Your-Storage-Account] > Access keys.
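
If you prefer the Azure CLI, the same key can be retrieved with az; the resource group name is a placeholder you will need to fill in:

az storage account keys list --resource-group [RESOURCE_GROUP] --account-name [STORAGE_ACCOUNT_NAME] --output table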

Inside your Linux instance, update the apt package index and install the latest version of Docker.

sudo apt-get update
sudo apt install docker.io

Run the DumpItForLinux command using Docker with the --dump-it and --action upload-az flags, providing your Azure Storage credentials and bucket name.

sudo docker run --privileged comaeio/dumpit-linux --dump-it --action upload-az --bucket [BUCKET_NAME] --az-account-name [STORAGE_ACCOUNT_NAME] --az-account-key [STORAGE_ACCOUNT_KEY]

DumpItForLinux will upload the full memory image data to your Azure Storage bucket.
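
You can check that the upload arrived with the Azure CLI; the container name matches the --bucket value you passed:

az storage blob list --container-name [BUCKET_NAME] --account-name [STORAGE_ACCOUNT_NAME] --account-key [STORAGE_ACCOUNT_KEY] --output table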

To upload the snapshot of the memory to Azure Storage, replace the --dump-it flag with --snap-it using the same docker command.

Amazon Web Services S3

Log in to your AWS account and, on the IAM > Users page, add the AmazonS3FullAccess policy in the Permissions tab. You also need the user's Access Key ID and Access Key Secret; you can create these credentials in the Security credentials tab if you haven't already. A bucket is also required: you can use an existing bucket or create a new one in S3. Just make sure the bucket exists before running the DumpItForLinux command. The same setup can also be scripted with the AWS CLI, as sketched below.
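
A minimal AWS CLI sketch of that setup; the IAM user name is a placeholder:

# Attach the S3 policy to the IAM user (placeholder user name).
aws iam attach-user-policy --user-name [IAM_USER_NAME] --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
# Create an access key pair for that user; note the ID and secret in the output.
aws iam create-access-key --user-name [IAM_USER_NAME]
# Create the destination bucket if it does not exist yet.
aws s3 mb s3://[BUCKET_NAME]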

Inside your Ubuntu instance, update the apt package index and install the latest version of Docker.

sudo apt-get update
sudo apt install docker.io

Run the DumpItForLinux command using Docker with the --dump-it and --action upload-s3 flags, providing your AWS user credentials and bucket name.

sudo docker run --privileged comaeio/dumpit-linux --dump-it --action upload-s3 --bucket [BUCKET_NAME] --aws-access-id [ACCESS_KEY_ID] --aws-access-secret [ACCESS_KEY_SECRET]

DumpItForLinux will upload the full memory image data to your AWS S3 bucket.
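
You can confirm the upload with the AWS CLI:

aws s3 ls s3://[BUCKET_NAME]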

To upload the snapshot of the memory to AWS S3, replace the --dump-it flag with --snap-it using the same docker command.

The dump file and snapshot metadata will be saved locally by default. You can also explicitly provide the --action store flag in the docker command to do the same thing.
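
For example, to explicitly store a memory snapshot in the current directory, reusing the volume mapping from the Local section:

COMAE_WORKDIR=$(sudo docker inspect -f '{{.Config.WorkingDir}}' comaeio/dumpit-linux)
sudo docker run --privileged -v $(pwd):$COMAE_WORKDIR comaeio/dumpit-linux --snap-it --action store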