Chapter 3

Linux

Because Windows incident response and memory analysis are good… but what about Linux?!

Store Memory Snapshot

Supported Linux Operating Systems

  • Ubuntu (Generic, Google Cloud, Microsoft Azure, Amazon AWS)
  • Red Hat Linux
  • Google Container-Optimized OS
  • Amazon Linux

Local

Update the apt package index and install Docker to run the free containerized version of DumpItForLinux.

sudo apt-get update
sudo apt install docker.io

If you are using CentOS/RHEL, consult the Docker documentation for the equivalent installation steps.

Map the current directory to the container's WORKDIR and acquire a full memory image:

COMAE_WORKDIR=$(sudo docker inspect -f '{{.Config.WorkingDir}}' comaeio/dumpit-linux)
sudo docker run --privileged -v $(pwd):$COMAE_WORKDIR comaeio/dumpit-linux --dump-it --action store

This will save the archive in the current directory.
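
To verify the acquisition and preserve evidence integrity, you can list the working directory and hash the archive. [ARCHIVE_NAME] below is a placeholder; the actual file name is generated by DumpItForLinux.

ls -lh
sha256sum [ARCHIVE_NAME]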

Send to Comae Stardust

Update the apt package index and install Docker to run the free containerized version of DumpItForLinux.

sudo apt-get update
sudo apt install docker.io

The Client ID and Client Secret can be found when you log in to your Comae Stardust account under Settings > Integrations.

Send Memory Snapshot to Comae Stardust

Run the DumpItForLinux command using Docker with the --snap-it and --action upload-comae flags, providing your Comae Stardust credentials.

sudo docker run --privileged comaeio/dumpit-linux --snap-it --comae-client-id <Client ID> --comae-client-secret <Secret ID> --action upload-comae

DumpItForLinux will send the pre-processed data to Comae Stardust.
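
If you prefer not to paste credentials directly on the command line (where they end up in shell history), you can export them as environment variables first. The variable names below are arbitrary; only the DumpItForLinux flags come from the tool.

export COMAE_CLIENT_ID="<Client ID>"
export COMAE_CLIENT_SECRET="<Secret ID>"
sudo docker run --privileged comaeio/dumpit-linux --snap-it --comae-client-id "$COMAE_CLIENT_ID" --comae-client-secret "$COMAE_CLIENT_SECRET" --action upload-comae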

Send Full Memory Image to Comae Stardust

Run the DumpItForLinux command using Docker with the --dump-it and --action upload-comae flags, providing your Comae Stardust credentials.

sudo docker run --privileged comaeio/dumpit-linux --dump-it --comae-client-id <Client ID> --comae-client-secret <Secret ID> --action upload-comae

DumpItForLinux will send a full memory image to Comae Stardust.

Google Cloud Platform

Getting started with GCP

A bucket must exist in Google Cloud Storage before running the DumpItForLinux Docker command.

To interact with the Google Cloud Platform through DumpItForLinux, you will need a service account and a credential file in JSON format. See the official documentation on service accounts and credential files: https://cloud.google.com/iam/docs/creating-managing-service-account-keys. You can optionally generate and download the credential file using gcloud CLI commands. First, log in to your GCP account.

gcloud auth login

You will be prompted with a link to authenticate as a GCP user. Open that link, log in with your GCP account, and copy the code provided. Paste it into the console to finish the authentication process.

Set the GCP project you are working on by using the following command.

gcloud config set project [PROJECT_ID]

Create a service account.

gcloud iam service-accounts create [YOUR_SERVICE_ACCOUNT_NAME]
gcloud projects add-iam-policy-binding [PROJECT_ID] --member "serviceAccount:[YOUR_SERVICE_ACCOUNT_NAME]@[PROJECT_ID].iam.gserviceaccount.com" --role "roles/owner"

Create a service account key.

gcloud iam service-accounts keys create /tmp/[FILE_NAME].json --iam-account [YOUR_SERVICE_ACCOUNT_NAME]@[PROJECT_ID].iam.gserviceaccount.com
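
To confirm that the key was created, you can list the keys attached to the service account (same placeholders as above):

gcloud iam service-accounts keys list --iam-account [YOUR_SERVICE_ACCOUNT_NAME]@[PROJECT_ID].iam.gserviceaccount.com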

Install Docker to run the free containerized version of DumpItForLinux.

sudo apt install docker.io

Run the DumpItForLinux command using Docker with the --snap-it and --action upload-gcp flags. You need to provide the path to the JSON file that contains your service account key, as well as the bucket name.

sudo docker run -v /tmp/[FILE_NAME].json:/tmp/[FILE_NAME].json --privileged comaeio/dumpit-linux --snap-it --action upload-gcp --gcp-creds-file /tmp/[FILE_NAME].json --bucket [BUCKET_NAME]

DumpItForLinux will upload the pre-processed data to the specified Google Cloud Storage bucket.

To upload a full memory image to Google Cloud Storage, replace the --snap-it flag with --dump-it in the same Docker command.
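
For reference, the resulting command is identical to the one above with the flag swapped (same placeholders):

sudo docker run -v /tmp/[FILE_NAME].json:/tmp/[FILE_NAME].json --privileged comaeio/dumpit-linux --dump-it --action upload-gcp --gcp-creds-file /tmp/[FILE_NAME].json --bucket [BUCKET_NAME]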

Microsoft Azure

You will need your Storage account’s Storage Account Name and Storage Account Key. Both can be found when you log in to your Azure account in Storage accounts > [Your-Storage-Account] > Access Keys.

Inside your Linux instance, update the apt package index and install the latest version of Docker.

sudo apt-get update
sudo apt install docker.io

Run the DumpItForLinux command using Docker with the --dump-it and --action upload-az flags, providing your Azure Storage credentials and the bucket (container) name.

sudo docker run --privileged comaeio/dumpit-linux --dump-it --action upload-az --bucket [BUCKET_NAME] --az-account-name [STORAGE_ACCOUNT_NAME] --az-account-key [STORAGE_ACCOUNT_KEY]

DumpItForLinux will upload the full memory image to your Azure Storage bucket.

To upload the pre-processed memory snapshot to Azure Storage, replace the --dump-it flag with --snap-it in the same Docker command.
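
For reference, the snapshot variant of the command looks like this (same placeholders):

sudo docker run --privileged comaeio/dumpit-linux --snap-it --action upload-az --bucket [BUCKET_NAME] --az-account-name [STORAGE_ACCOUNT_NAME] --az-account-key [STORAGE_ACCOUNT_KEY]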

Amazon Web Services S3

Log in to your AWS account and, on the IAM > Users page, add the AmazonS3FullAccess policy in the Permissions tab. You also need the user’s Access Key ID and Access Key Secret; you can create these credentials in the Security credentials tab if you haven’t done so yet. A bucket is also required: you can use an existing bucket or create a new one in S3. Just make sure the bucket exists before running the DumpItForLinux command.

Inside your Ubuntu instance, update the apt package index and install the latest version of Docker.

sudo apt-get update
sudo apt install docker.io

Run the DumpItForLinux command using Docker with the --dump-it and --action upload-s3 flags, providing your AWS user credentials and the bucket name.

sudo docker run --privileged comaeio/dumpit-linux --dump-it --action upload-s3 --bucket [BUCKET_NAME] --aws-access-id [ACCESS_KEY_ID] --aws-access-secret [ACCESS_KEY_SECRET]

DumpItForLinux will upload the full memory image to your AWS S3 bucket.

To upload the pre-processed memory snapshot to AWS S3, replace the --dump-it flag with --snap-it in the same Docker command.
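
For reference, the snapshot variant of the command looks like this (same placeholders):

sudo docker run --privileged comaeio/dumpit-linux --snap-it --action upload-s3 --bucket [BUCKET_NAME] --aws-access-id [ACCESS_KEY_ID] --aws-access-secret [ACCESS_KEY_SECRET]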

The dump file and snapshot metadata will be saved locally by default. You can also explicitly provide the --action store flag in the docker command to do the same thing.
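
For example, to store a pre-processed snapshot locally, reuse the volume mapping from the Local section so that the output lands on the host:

COMAE_WORKDIR=$(sudo docker inspect -f '{{.Config.WorkingDir}}' comaeio/dumpit-linux)
sudo docker run --privileged -v $(pwd):$COMAE_WORKDIR comaeio/dumpit-linux --snap-it --action store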

Getting Started

Commands

Note that the --comae-hostname parameter is only mandatory for hosted clusters; the default API endpoint is api.comae.com.

$ python .\comae.py
[COMAE] No action provided. Please provide an action.
usage: comae.py [-h] [-k] [-d] [-s] [--action ACTION] [--file-url FILE_URL] [--file-local FILE_LOCAL] [--bucket BUCKET] [--comae-case-id COMAE_CASE_ID] [--comae-client-id COMAE_CLIENT_ID]
                [--comae-client-secret COMAE_CLIENT_SECRET] [--comae-hostname COMAE_HOSTNAME] [--list-organizations] [--list-cases] [--gcp-creds-file GCP_CREDS_FILE] [--az-account-name AZ_ACCOUNT_NAME]     
                [--az-account-key AZ_ACCOUNT_KEY] [--aws-access-id AWS_ACCESS_ID] [--aws-access-secret AWS_ACCESS_SECRET]

Comae Stardust Client

optional arguments:
  -h, --help            show this help message and exit
  -k, --get-api-key     Get Comae Stardust API Key
  -d, --dump-it         Dump with Comae DumpIt and send to Comae Stardust
  -s, --snap-it         Dump Mem2Json and send to Comae Stardust
  --action ACTION       One of "store", "upload-comae", "upload-gcp", "upload-az", "upload-s3"
  --file-url FILE_URL   URL of a dump/snapshot file. The tool will not upload the local file if it is specified.
  --file-local FILE_LOCAL
                        Path to a local dump/snapshot file to upload instead of acquiring a new one.
  --bucket BUCKET       Name of bucket to use if uploading to GCP / Azure / S3
  --comae-case-id COMAE_CASE_ID
                        Comae Case ID if uploading to Comae Stardust
  --comae-client-id COMAE_CLIENT_ID
                        Comae Client ID if uploading to Comae Stardust
  --comae-client-secret COMAE_CLIENT_SECRET
                        Comae Client Secret if uploading to Comae Stardust
  --comae-hostname COMAE_HOSTNAME
                        Comae hostname (API endpoint) if uploading to Comae Stardust
  --list-organizations  List organizations for the account
  --list-cases          List cases of all the orgs
  --gcp-creds-file GCP_CREDS_FILE
                        Path to file containing GCP credentials, if uploading to GCP
  --az-account-name AZ_ACCOUNT_NAME
                        Account name if uploading to Azure
  --az-account-key AZ_ACCOUNT_KEY
                        Account key if uploading to Azure
  --aws-access-id AWS_ACCESS_ID
                        AWS access key ID
  --aws-access-secret AWS_ACCESS_SECRET
                        AWS access key secret

Retrieve information

Get the list of organizations

$ python .\comae.py --comae-client-id XXXXXXXXXXXXXXXXXXXXXXXXXXXXX --comae-client-secret YYYYYYYYYYYYYYYYYYYYYYYY --comae-hostname api.comae.com --list-organizations
[COMAE] Requesting Comae Stardust API key....
     Organization Id           Name
     ---------------           ----
     ffff58af916ac0001d4027d9  Comae Response

Get the list of cases

$ python .\comae.py --comae-client-id XXXXXXXXXXXXXXXXXXXXXXXXXXXXX --comae-client-secret YYYYYYYYYYYYYYYYYYYYYYYY --comae-hostname api.comae.com --list-cases
[COMAE] Requesting Comae Stardust API key....
     organizationId           _id                      name          description             creationDate             lastModificationDate     labels
     --------------           ---                      ----          -----------             ------------             --------------------     ------
     ffff2a9e9fcc6ffff1b631bb ffff69e61eac0ffffd4fcd27 TestCase      Hello                   2020-11-20T07:01:58.543Z 2020-11-20T07:01:58.543Z demo
     ffff2a9e9fcc6ffff1b631bb ffff6ad0b1f656001ef0e2e6 TestCase2     Description2            2020-11-20T07:05:52.495Z 2020-11-20T07:05:52.495Z workflow:state="complete"
     ffff2a9e9fcc6ffff1b631bb ffff7091ac3d30001d11f19a Demomaker     A bunch of random dumps 2020-11-20T07:30:25.246Z 2020-11-20T07:30:25.246Z iep2-policy:tlp="amber", ifx-vetting:vetted="legit-uncertain", 
priority-level:high, event-classification:event-class="general", workflow:state="ongoing"
     ffff2a9e9fcc6ffff1b631bb ffff87a057ec85001ed7a7bf Test3         Untitled                2020-11-20T09:08:48.546Z 2020-11-20T09:08:48.546Z workflow:state="incomplete"
     ffff2a9e9fcc6ffff1b631bb ffff9a08ac3d30001d11f1a2 NewOmega1     Untitled                2020-11-20T10:27:20.991Z 2020-11-20T10:27:20.991Z workflow:todo="add-tagging"
     ffff2a9e9fcc6ffff1b631bb ffff9cb0ac3d30001d11f1a4 Case123       Untitled                2020-11-20T10:38:40.916Z 2020-11-20T10:38:40.916Z
     ffff2a9e9fcc6ffff1b631bb ffffc24b1306f8001c2ab845 Untitled Case Untitled Case           2020-11-20T13:19:07.312Z 2020-11-20T13:19:07.312Z

Send a local file

If you use --dump-it or --snap-it without specifying a local file with the --file-local parameter, the CLI will automatically collect an image to send. Otherwise, use the --file-local parameter to send an image dump (--dump-it) or a snapshot (--snap-it).

$ python .\comae.py --comae-client-id XXXXXXXXXXXXXXXXXXXXXXXXXXXXX --comae-client-secret YYYYYYYYYYYYYYYYYYYYYYYY --comae-hostname api.comae.com --action upload-comae --comae-case-id ffff7091ac3d30001d11f19a --file-local D:\Dumps\NVIDIARTX.dmp.zip --dump-it
[COMAE] Acquiring the memory image with Comae DumpIt...
[COMAE] Requesting Comae Stardust API key....
[COMAE] Requesting Comae Stardust API key....
[COMAE] Uploading file to Comae

[COMAE] Upload complete!
[COMAE] Uploaded to Comae Stardust