CDF – Crypto Differential Fuzzing

CDF is a tool to automatically test the correctness and security of cryptographic software. CDF can detect implementation errors, compliance failures, side-channel leaks, and so on.

CDF implements a combination of unit tests with "differential fuzzing", an approach that compares the behavior of different implementations of the same primitives when fed edge cases and values that maximize code coverage.

Unlike general-purpose fuzzers and testing software, CDF is:

- Smart: CDF knows what kind of algorithm it is testing and adapts to the tested functions
- Fast: CDF tests only what needs to be tested and parallelizes its tests as much as possible
- Polyvalent: CDF isn't specific to any language or API, but supports arbitrary executable programs or scripts
- Portable: CDF runs on any Unix or Windows platform, since it is written in Go without any platform-specific dependency

The purpose of CDF is to give developers and security researchers a more efficient testing tool: more effective than test vectors and cheaper than a manual audit or formal verification.

CDF was first presented at Black Hat USA 2017. You can view the slides of the presentation, which contain general information about the rationale behind CDF and its design.

Requirements

CDF is written in Go; the current version has been developed using Go 1.8. It has no dependencies outside of Go's standard library. However, the example programs provided for testing with CDF are written in C, Python, C++, Java and Go, and require specific crypto libraries to run.
Currently required libraries are:

- CryptoPP
- OpenSSL
- BouncyCastle
- PyCrypto
- Cryptography.io

Build

make will build the cdf binary. A number of example programs are available under examples: make examples-all will build all the examples, while make examples-go will only build the Go examples. make test will run CDF's unit tests.

Usage

For starters you may want to view usage info by running cdf -h. You may then try an example such as the rsaenc interface against the RSA-OAEP Go and CryptoPP examples. Taking CryptoPP as the reference, you can test the Go implementation by running:

cdf rsaenc /examples/oaep_rsa2048_go /examples/oaep_rsa2048_cryptopp

This command performs various tests specific to the rsaenc interface. In this example, CDF should complain about the maximum public exponent size the Go implementation supports: checking its code shows that the public exponent is stored as a normal integer, whereas in CryptoPP (and most other implementations) it is stored as a big integer. This is, however, by design and will likely not be changed.

Parameters are defined in config.json. Most parameters are self-explanatory. You may want to set other private keys for rsaenc and ecdsa (these interfaces are tested with fixed keys, although some key parameters, such as the exponents, are changed in some of the tests). The seed parameter lets you change the seed used by CDF's pseudo-random generators. (The tested program may still use a PRNG seeded otherwise, as the OAEP examples do.) The concurrency parameter sets the number of concurrent goroutines CDF spawns when forking the programs; it is best to keep this number below the real number of cores.
The verboseLog parameter, if set to true, writes all programs' inputs and outputs, even for successful tests, to a file log.txt.

Interfaces

To test your software using CDF, you have to create a program that reads input and writes output in conformance with CDF interfaces, and that internally calls the tested program. CDF interfaces are abstractions of a crypto functionality, allowing black-box testing of arbitrary implementations.

For example, if you implemented the ECDSA signature scheme, your program should satisfy the ecdsa interface and as such take 4 or 5 arguments as input, to sign a message or verify a signature respectively. To sign, the arguments are the public X coordinate, the public Y coordinate, the private D big integer and the message to sign; the program should then output only the big integers R and S, each on its own line. To verify a message, the program should accept X, Y, R, S and the message, and output only True or False. The interfaces' specifications are detailed below. Our examples of interface implementations will help you create your own.

Error handling is left to the tested program; however, to get meaningful errors in CDF it is best to exit on failure, return an error code and print an error message. The interface program can be written in any language; it just needs to be an executable file conformant with a CDF interface. An interface program is typically written in the same language as the tested program, but that's not mandatory (it may be a wrapper in another language, for example for Java programs).

CDF currently supports the following interfaces, wherein parameters are encoded as hexadecimal ASCII strings unless described otherwise:

dsa

The dsa interface tests implementations of the Digital Signature Algorithm (DSA).
It must support the signature and verification operations:

Operation     Input          Output
Signature     p q g y x m    r s
Verification  p q g y r s m  truth value

Here p, q, g are DSA parameters, y is a public key, x is a private key, m is a message, and r and s form the signature, which must be returned separated by a newline. The truth value, either "true" or "false", is represented as a string.

The dsa interface supports an optional flag: -h allows bypassing the hashing process and directly providing the hash value to be signed. This allows CDF to perform more tests, such as checking for overflows or hash truncation.

ecdsa

The ecdsa interface tests implementations of the Elliptic Curve Digital Signature Algorithm (ECDSA). It must support the signature and verification operations:

Operation     Input      Output
Signature     x y d m    r s
Verification  x y r s m  truth value

Here x and y are the public ECDSA key coordinates, d is a private key, m is a message, and r and s form the signature, which must be returned separated by a newline. The truth value, either "true" or "false", is represented as a string. The flag -h serves the same purpose as with dsa.

Please note that our current design assumes a fixed curve, defined in the tested program. To obtain reproducible results with these tests and leverage all of CDF's detection abilities, you have to either seed your random generator with a fixed seed or use a deterministic ECDSA variant; otherwise CDF can't automatically detect problems such as identical signatures ("same tag" issues).

enc

The enc interface tests symmetric encryption and decryption operations, typically performed with a block cipher (stream ciphers can be tested with the prf interface).
It must support encryption and decryption:

Operation   Input  Output
Encryption  k m    c
Decryption  k c    r

Here k is a key, m is a message, c is a ciphertext and r is a recovered plaintext.

prf

The prf interface tests keyed hashing (pseudorandom functions, MACs) as well as stream ciphers:

Operation    Input  Output
Computation  k m    h

Here k is a key, m is a message (or a nonce in the case of a stream cipher), and h is the result of the PRF computation. This interface assumes a fixed key size and variable input lengths. If a specific key is to be specified, it is the responsibility of the tested program to ignore the key input, or the xof interface may be a better choice.

rsaenc

The rsaenc interface tests RSA encryption and decryption, both OAEP (PKCS #1 v2.1) and PKCS #1 v1.5:

Operation   Input      Output
Encryption  n e m      c
Decryption  p q e d c  r

Here n is a modulus, e is a public exponent (for compatibility with certain libraries, e is also needed for decryption), m is a message, p and q are n's factors (such that p > q, since libraries commonly require it), d is a private exponent, and r is a recovered plaintext.

xof

The xof interface tests hash functions, extendable-output functions (XOFs) and deterministic random bit generators (DRBGs):

Operation    Input  Output
Computation  m      h

Here m is the message and h is the result.

Authors

CDF is based on initial ideas by JP Aumasson, first disclosed at WarCon 2016, and most of the code was written by Yolan Romailler.

Download CDF
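To make the interface contract concrete, here is a hedged Python sketch of a wrapper program satisfying the prf interface described above. HMAC-SHA256 stands in as the tested primitive; the choice of primitive and the wrapper's file name are illustrative assumptions, not part of CDF:

```python
import hashlib
import hmac
import sys

def prf(key_hex, msg_hex):
    """Compute the tested PRF (here: HMAC-SHA256) on hex-encoded inputs,
    returning the result as a hex string, as the prf interface expects."""
    key = bytes.fromhex(key_hex)
    msg = bytes.fromhex(msg_hex)
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

if __name__ == "__main__" and len(sys.argv) == 3:
    # CDF would invoke the wrapper as: ./prf_wrapper <key hex> <message hex>
    print(prf(sys.argv[1], sys.argv[2]))
```

A wrapper like this stays thin on purpose: it only decodes CDF's hex arguments, calls the implementation under test, and prints the hex result, so any behavioral difference CDF reports comes from the tested primitive rather than the glue code.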

Link: http://www.kitploit.com/2019/02/cdf-crypto-differential-fuzzing.html

Justniffer – Network TCP Packet Sniffer

Justniffer is a network protocol analyzer that captures network traffic and produces logs in a customized way; it can emulate Apache web server log files, track response times and extract all "intercepted" files from the HTTP traffic. It lets you interactively trace TCP traffic from a live network or from a previously saved capture file. Justniffer's native capture file format is the libpcap format, which is also the format used by tcpdump and various other tools.

Reliable TCP flow rebuilding

Justniffer's main feature is its ability to handle all the complex low-level protocol issues and retrieve the correct flow of the TCP/IP traffic: IP fragmentation, TCP retransmission, reordering, etc. It uses portions of Linux kernel source code to handle all the TCP/IP details. More precisely, it uses a slightly modified version of the libnids library, which already includes a modified version of the Linux code in a more reusable form.

Optimized for "request/response" protocols, it is able to track server response times. Justniffer was born as a tool to help analyze performance problems in complex network environments, where it becomes impractical to analyze network captures solely with low-level packet sniffers (Wireshark, tcpdump, etc.). It helps you quickly identify the most significant bottlenecks by analyzing performance at the "application" protocol level.

In very complex and distributed systems it is often useful to understand how communication takes place between different components, and when this is implemented as a network protocol based on TCP/IP (HTTP, JDBC, RTSP, SIP, SMTP, IMAP, POP, LDAP, REST, XML-RPC, IIOP, SOAP, etc.), justniffer comes in handy. Often the logging and monitoring facilities of these systems do not report information that is important for determining performance issues, such as the response time of each network request.
This may be because they run in a "production" environment and cannot be too verbose, or because they are in-house developed applications that do not provide such logging. At other times it is desirable to collect access logs from web services implemented on different environments (various web servers, application servers, Python web frameworks, etc.), or from web services that are not accessible and therefore traceable only on the client side.

Justniffer can capture traffic in promiscuous mode, so it can be installed on a dedicated and independent station within the same network "collision domain" as the gateway of the systems to be analyzed, collecting all traffic without affecting system performance and without requiring invasive installation of new software in production environments.

Can rebuild and save HTTP content to files

The robust implementation of TCP flow reconstruction turns it into a multipurpose sniffer:

- HTTP sniffer
- LDAP sniffer
- SMTP sniffer
- SIP sniffer
- password sniffer

justniffer can also be used to retrieve files sent over the network.

It is extensible

Justniffer can be extended by external scripts. A Python script has been developed to recover all files sent via HTTP (images, text, HTML, JavaScript, etc.).

Features summary

- Reliable TCP flow rebuilding: it can reorder and reassemble TCP segments and IP fragments using portions of the Linux kernel code
- Customizable text-mode logging
- Extensibility by any executable: bash, Python or Perl scripts, ELF executables, etc.
- Performance measurement: it can collect much information on performance: connection time, request time, response time, close time, etc.

Download Justniffer

Link: http://feedproxy.google.com/~r/PentestTools/~3/ZeOTT8XrMaE/justniffer-network-tcp-packet-sniffer.html

Volatility Workbench – A GUI For Volatility Memory Forensics

Volatility Workbench is a graphical user interface (GUI) for the Volatility tool. Volatility is a command-line memory analysis and forensics tool for extracting artifacts from memory dumps. Volatility Workbench is free, open source and runs on Windows. It provides a number of advantages over the command-line version, including:

- No need to remember command-line parameters.
- Storage of the operating system profile, KDBG address and process list with the memory dump, in a .CFG file. When a memory image is re-loaded, this saves a lot of time and avoids the frustration of not knowing the correct profile to select.
- Simpler copy & paste.
- Simpler printing of paper copies (via right click).
- Simpler saving of the dumped information to a file on disk.
- A drop-down list of available commands and a short description of what each command does.
- Time stamping of the commands executed.
- Auto-loading of the first dump file found in the current folder.
- Support for analysing Mac and Linux memory dumps.

Download Volatility Workbench

Link: http://feedproxy.google.com/~r/PentestTools/~3/OzWarBRi5YU/volatility-workbench-gui-for-volatility.html

HTTrack Website Copier – Web Crawler And Offline Browser

HTTrack allows you to download a World Wide Web site from the Internet to a local directory, recursively building all directories and getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link structure. Simply open a page of the "mirrored" website in your browser, and you can browse the site from link to link, as if you were viewing it online. HTTrack can also update an existing mirrored site and resume interrupted downloads. HTTrack is fully configurable and has an integrated help system.

WinHTTrack is the Windows (from Windows 2000 to Windows 10 and above) release of HTTrack, and WebHTTrack is the Linux/Unix/BSD release.

Download HTTrack Website Copier

Link: http://feedproxy.google.com/~r/PentestTools/~3/-iUl75kJzG4/httrack-website-copier-web-crawler-and.html

OSFMount – Mount Disk Images & Create RAM Drives

OSFMount allows you to mount local disk image files (bit-for-bit copies of a disk partition) in Windows with a drive letter. You can then analyze the disk image file with PassMark OSForensics™ by using the mounted volume's drive letter. By default, image files are mounted read-only so that the original image files are not altered.

OSFMount also supports the creation of RAM disks, i.e. disks mounted in RAM. These generally offer a large speed benefit over a hard disk, which makes them useful for applications requiring high-speed disk access, such as database applications, games (game cache files) and browsers (cache files). A second benefit is security: the disk contents are stored in RAM rather than on a physical hard disk, and do not persist across a system shutdown. At the time of writing, we believe this is the fastest RAM drive software available.

OSFMount supports mounting images of CDs in .ISO format, which can be useful when a particular CD is used often and the speed of access is important.

Download OSFMount

Link: http://feedproxy.google.com/~r/PentestTools/~3/b1UlY7C2tko/osfmount-mount-disk-images-create-ram.html

Process Hacker – A Free, Powerful, Multi-Purpose Tool That Helps You Monitor System Resources, Debug Software And Detect Malware

A free, powerful, multi-purpose tool that helps you monitor system resources, debug software and detect malware.

System requirements

Windows 7 or higher, 32-bit or 64-bit.

Features

- A detailed overview of system activity with highlighting.
- Graphs and statistics that let you quickly track down resource hogs and runaway processes.
- Can't edit or delete a file? Discover which processes are using that file.
- See what programs have active network connections, and close them if necessary.
- Get real-time information on disk access.
- View detailed stack traces with kernel-mode, WOW64 and .NET support.
- Go beyond services.msc: create, edit and control services.
- Small, portable and no installation required.
- 100% Free Software (GPL v3).

Building the project

Requires Visual Studio (2017 or later). Execute build_release.cmd located in the build directory to compile the project, or load the ProcessHacker.sln and Plugins.sln solutions if you prefer building the project in Visual Studio. You can download the free Visual Studio Community Edition to build, run or develop Process Hacker.

Additional information

You cannot run the 32-bit version of Process Hacker on a 64-bit system and expect it to work correctly, unlike other programs.

Enhancements/Bugs

Please use the GitHub issue tracker for reporting problems or suggesting new features.

Settings

If you are running Process Hacker from a USB drive, you may want to save Process Hacker's settings there as well. To do this, create a blank file named "ProcessHacker.exe.settings.xml" in the same directory as ProcessHacker.exe.
You can do this using Windows Explorer:

1. Make sure "Hide extensions for known file types" is unticked in Tools > Folder options > View.
2. Right-click in the folder and choose New > Text Document.
3. Rename the file to ProcessHacker.exe.settings.xml (delete the ".txt" extension).

Plugins

Plugins can be configured from Hacker > Plugins. If you experience any crashes involving plugins, make sure they are up to date. Disk and network information provided by the ExtendedTools plugin is only available when running Process Hacker with administrative rights.

KProcessHacker

Process Hacker uses a kernel-mode driver, KProcessHacker, to assist with certain functionality. This includes:

- Capturing kernel-mode stack traces
- More efficiently enumerating process handles
- Retrieving names for file handles
- Retrieving names for EtwRegistration objects
- Setting handle attributes

Note that by default, KProcessHacker only allows connections from processes with administrative privileges (SeDebugPrivilege). To allow Process Hacker to show details for all processes when it is not running as administrator:

1. In Registry Editor, navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\KProcessHacker3.
2. Under this key, create a key named Parameters if it does not exist.
3. Create a DWORD value named SecurityLevel and set it to 2. If you are not using an official build, you may need to set it to 0 instead.
4. Restart the KProcessHacker3 service (sc stop KProcessHacker3, then sc start KProcessHacker3).

Download Processhacker
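The registry change described above can also be applied by importing a .reg file. This is a sketch using the key and value names given in the article; it assumes an official build (SecurityLevel set to 2):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\KProcessHacker3\Parameters]
"SecurityLevel"=dword:00000002
```

After importing the file, restart the KProcessHacker3 service as described above for the change to take effect.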

Link: http://feedproxy.google.com/~r/PentestTools/~3/nL_bfHHeQgA/process-hacker-free-powerful-multi.html

CANalyzat0r – Security Analysis Toolkit For Proprietary Car Protocols

This software project is the result of a Bachelor's thesis created at SCHUTZWERK in collaboration with Aalen University by Philipp Schmied. Please refer to the corresponding blog post for more information.

Why another CAN tool?

- Built from scratch with new ideas for analysis mechanisms
- Bundles features of many other tools in one place
- Modular and extensible: read the docs and implement your own analysis mechanisms
- Comfortable analysis using a GUI
- Manage work in separate projects using a database
- Documentation: read the docs if you need a manual or technical info

Installing and running

Run sudo ./install_requirements.sh along with sudo -E ./CANalyzat0r.sh. This will create a folder called pipenv with a pipenv environment in it. Or just use the Docker version, which is recommended at this time (check the README.md file in the subdirectory). For more information, read the HTML or PDF version of the documentation in the ./doc/build folder.

Features

- Manage interface configuration (automatic loading of kernel modules, manage physical and virtual SocketCAN devices)
- Multi-interface support
- Manage your work in projects; you can also import and export them in the human-readable/editable JSON format
- Logging of all actions
- Graphical sniffing
- Manage findings, dumps and known packets per project
- Easy copy and paste between tabs
- Also, you can just paste your SocketCAN files into a table that allows pasting
- Threaded sending, fuzzing and sniffing at the same time
- Add multiple analyzing threads in the GUI
- Ignore packets when sniffing: automatically filter unique packets by ID, or by data and ID
- Compare dumps
- Set up complex configurations using only one window
- Clean organization in tabs for each analysis task
- Binary packet filtering with randomization
- Search for action-specific packets using background-noise filtering
- SQLite support
- Fuzz and change values on the fly

Testing it

You can use the Instrument Cluster Simulator to tinker with a virtual CAN bus without having to attach real CAN devices to your machine.

Troubleshooting

Empty GUI windows: make sure that the QT_X11_NO_MITSHM environment variable is set to 1. When using sudo, include the -E option to preserve this environment variable, as follows: sudo -E ./CANalyzat0r.sh.

Fixing the GUI style: this application has to be run as superuser. Because of a missing configuration, the displayed style can be set to an unwanted value when the effective UID is 0. To fix this behaviour, follow one of these steps:

- Quick way: execute echo -e "[QT]\nstyle=CleanLooks" >> ~/.config/Trolltech.conf
- Alternative way: install qt4-qtconfig (sudo apt-get install qt4-qtconfig), then run qtconfig-qt4 as superuser and change the GUI style to CleanLooks or GTK+
- Or use the Docker container

Download CANalyzat0r

Link: http://feedproxy.google.com/~r/PentestTools/~3/KPeA8qxDNEk/canalyzat0r-security-analysis-toolkit.html

DFIRTrack – The Incident Response Tracking Application

DFIRTrack (Digital Forensics and Incident Response Tracking application) is an open source web application, mainly based on Django and using a PostgreSQL database backend.

In contrast to other great incident response tools, which are mainly case-based and support the daily business of CERTs, SOCs etc., DFIRTrack is focused on handling one major incident with a lot of affected systems, as is often observed in APT cases. It is meant to be used as a tool for dedicated incident response teams in large cases. So, of course, CERTs and SOCs may use DFIRTrack as well, but they may find it more appropriate for special cases than for everyday work.

In contrast to case-based applications, DFIRTrack works in a system-based fashion. It keeps track of the status of various systems and the tasks associated with them, keeping the analyst well informed about the status and number of affected systems at any time, from the investigation phase up to the remediation phase of the incident response process.

Features

One focus is the fast and reliable import and export of systems and associated information. The goal for importing systems is to provide a fast and error-free procedure. The goal for exporting systems and their status is to have multiple forms of documentation, for instance detailed Markdown reports for technical staff vs. spreadsheets for non-technical audiences, without redundancies and deviations in the data sets. A manager whose numbers match is a happy manager!
;-)

The following functions are implemented for now:

Importer
- Creator (fast creation of multiple related instances via the web interface) for systems and tasks
- CSV (simple and generic CSV-based import, either hostname and IP or hostname and tags, combined with a web form; should fit the export capabilities of many tools)
- Markdown for entries (one entry per system (report))

Exporter
- Markdown for so-called system reports (for use in a MkDocs structure)
- Spreadsheet (CSV and XLS)
- LaTeX (planned)

Installation and dependencies

DFIRTrack is developed for deployment on Debian Stretch or Ubuntu 16.04. Other Debian-based distributions or versions may work but have not been tested yet. At the moment the project is focused on Ubuntu LTS and Debian releases. For a fast and uncomplicated installation on a dedicated server, including all dependencies, an Ansible playbook and role were written (available here). For testing, a Docker environment is provided (see below).

For a minimal setup the following dependencies are needed:

- django (2.0)
- django_q
- djangorestframework
- gunicorn
- postgresql
- psycopg2-binary
- python3-pip
- PyYAML
- requests
- virtualenv
- xlwt

Note that there is no settings.py in this repository. This file is submitted via Ansible or has to be copied and configured by hand. That will be changed in the future (see the issues for more information).

Docker environment

An experimental Docker Compose environment for local-only usage is provided in this project. Run the following command in the project root directory to start the environment:

docker-compose up

A user admin is already created. A password can be set with:

docker/setup_admin.sh

The application is located at localhost:8000.

Built-in software

The application was created using the following libraries and code:

- Bootstrap
- clipboard.js
- DataTables
- jQuery
- Open Iconic
- Popper.js

Development

There are two main branches:

- master
- development

The master branch should be stable (as far as you can expect from an alpha version).
New features and changes are added to the development branch and merged into master from time to time. Everything merged into development should run too, but might need manual changes (e.g. config). The development branch of DFIRTrack Ansible should follow these changes. So if you want to see the latest features and progress, "check out" development.

Disclaimer

This software is in an early alpha phase, so a lot of work remains to be done. Even though some basic error checking is implemented, for now the safe usage of DFIRTrack mainly depends on proper handling. DFIRTrack was not, and most likely never will be, intended for use on publicly available servers. Nevertheless, some basic security features were implemented (in particular in connection with the corresponding Ansible role). Always install DFIRTrack in a secured environment (e.g. a dedicated virtual machine or a separated network)!

Download Dfirtrack
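The generic CSV importer described above accepts simple rows such as hostname and IP. The exact column layout DFIRTrack expects is not documented here, so this Python sketch of producing such a file is an illustration under that assumption:

```python
import csv
import io

def systems_to_csv(systems):
    """Render (hostname, ip) pairs as CSV text for a generic importer.
    The two-column "hostname,ip" layout is an assumption for illustration,
    not DFIRTrack's documented schema."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for hostname, ip in systems:
        writer.writerow([hostname, ip])
    return buf.getvalue()

# Example: export two affected systems for import via the web form.
print(systems_to_csv([("web01", "10.0.0.5"), ("dc01", "10.0.0.10")]))
```

Generating the file programmatically like this fits the stated goal of a fast, error-free import: the list of affected systems can come straight from whatever inventory or scanning tool produced it.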

Link: http://feedproxy.google.com/~r/PentestTools/~3/vHFBZOQWsMA/dfirtrack-incident-response-tracking.html

RedELK – Easy Deployable Tool For Red Teams Used For Tracking And Alarming About Blue Team Activities As Well As Better Usability In Long Term Operations

Red Team's SIEM: an easily deployable tool for red teams, used for tracking and alarming on blue team activities, as well as better usability for the red team in long-term operations.

Initial public release at BruCON 2018:
Video: https://www.youtube.com/watch?v=OjtftdPts4g
Presentation slides: https://github.com/outflanknl/Presentations/blob/master/MirrorOnTheWall_BruCon2018_UsingBlueTeamTechniquesinRedTeamOps_Bergman-Smeets_FINAL.pdf

Goal of the project

Short: a red team's SIEM.

Longer: a red team's SIEM that serves three goals:

- Enhanced usability and overview for the red team operators, by creating a central location where all relevant operational logs from multiple teamservers are collected and enriched. This is great for historic searching within the operation, as well as giving a read-only view on the operation (e.g. for the White Team). It is especially useful for multi-scenario, multi-teamserver, multi-member and multi-month operations. It also offers super easy ways of viewing all screenshots, IOCs, keystroke output, etc. \o/
- Spot the blue team, by having a central location where all traffic logs from redirectors are collected and enriched. Using specific queries, it is now possible to detect that the blue team is investigating your infrastructure.
- Out-of-the-box usability, by being easy to install and deploy, and by having ready-made views, dashboards and alarms.

Here's a conceptual overview of how RedELK works. RedELK uses the typical components Filebeat (shipping), Logstash (filtering), Elasticsearch (storage) and Kibana (viewing). Rsync is used for a second syncing of teamserver data: logs, keystrokes, screenshots, etc. Nginx is used for authentication to Kibana, as well as for serving the screenshots, beacon logs and keystrokes in an easy way in the operator's browser. A set of Python scripts is used for heavy enrichment of the log data, and for blue team detection.

Supported tech and requirements

RedELK currently supports:

- Cobalt Strike teamservers
- HAProxy for HTTP redirector data.
Apache support is expected soon. RedELK is tested on Ubuntu 16 LTS.

RedELK requires a modification to the default HAProxy configuration in order to log more details.

In the 'general' section:

log-format frontend:%f/%H/%fi:%fp\ backend:%b\ client:%ci:%cp\ GMT:%T\ useragent:%[capture.req.hdr(1)]\ body:%[capture.req.hdr(0)]\ request:%r

In the 'frontend' section:

declare capture request len 40000
http-request capture req.body id 0
capture request header User-Agent len 512

Installation

First-time installation

Adjust ./certs/config.cnf to include the right details for the TLS certificates. Once done, run:

initial-setup.sh

This will create a CA, generate the necessary certificates for secure communication between redirectors, teamserver and ELK server, and generate an SSH key pair for secure rsync authentication of the ELK server to the teamserver. It also generates teamservers.tgz, redirs.tgz and elkserver.tgz, which contain the installation packages for each component. Rerunning this initial setup is not required, but if you want new certificates for a new operation, you can simply run it again.

Installation of redirectors

Copy and extract redirs.tgz on your redirector as part of your red team infra deployment procedures. Run:

install-redir.sh $FilebeatID $ScenarioName $IP/DNS:PORT

- $FilebeatID is the identifier of this redirector within Filebeat.
- $ScenarioName is the name of the attack scenario this redirector is used for.
- $IP/DNS:PORT is the IP or DNS name and port where Filebeat logs are shipped to.

This script will set the timezone (default Europe/Amsterdam), install Filebeat and dependencies, install the required certificates, adjust the Filebeat configuration and start Filebeat.

Installation of teamserver

Copy and extract teamservers.tgz on your Cobalt Strike teamserver as part of your red team infra deployment procedures.
Run:

install-teamserver.sh $FilebeatID $ScenarioName $IP/DNS:PORT

- $FilebeatID is the identifier of this teamserver within Filebeat.
- $ScenarioName is the name of the attack scenario this teamserver is used for.
- $IP/DNS:PORT is the IP or DNS name and port where Filebeat logs are shipped to.

This script will warn if Filebeat is already installed (important, as ELK and Filebeat can be very picky about matching versions), set the timezone (default Europe/Amsterdam), install Filebeat and dependencies, install the required certificates, adjust the Filebeat configuration, start Filebeat, create a local user 'scponly' and limit that user to SSH key-based authentication via scp/sftp/rsync.

Installation of ELK server

Copy and extract elkserver.tgz on your RedELK server as part of your red team infra deployment procedures. Run the included install script for the ELK server component. This script will set the timezone (default Europe/Amsterdam); install Logstash, Elasticsearch, Kibana and dependencies; install the required certificates; deploy the Logstash configuration and the required custom Ruby enrichment scripts; download the GeoIP databases; install and configure Nginx; create a local user 'redelk' with the earlier generated SSH keys; install the script for rsyncing remote logs from teamservers; install the script used for creating thumbnails of screenshots; install the RedELK configuration files; install the crontab file for RedELK tasks; install the GeoIP Elasticsearch plugins and adjust the template; install the Python enrichment scripts; and finally install the Python blue team detection scripts.

You are not done yet. You need to manually enter the details of your teamservers in /etc/cron.d/redelk, and tune the config files in /etc/redelk (see the section below).

Setting up enrichment and detection

On the ELK server, the /etc/redelk directory contains several files that you can use to tune your RedELK instance for better enrichment and better alarms.
These files are:

- /etc/redelk/iplist_customer.conf: public IP addresses of your target, one per line. Including an address here will set a tag for applicable records in the redirhaproxy-* index.
- /etc/redelk/iplist_redteam.conf: public IP addresses of your red team, one per line. Convenient for identifying testing done by red team members. Including an address here will set a tag for applicable records in the redirhaproxy-* index.
- /etc/redelk/iplist_unknown.conf: public IP addresses of gateways that you are not sure about yet, but don't want to be warned about again, one per line. Including an address here will set a tag for applicable records in the redirhaproxy-* index.
- /etc/redelk/known_sandboxes.conf: beacon characteristics of known AV sandbox systems, one per line. Including data here will set a tag for applicable records in the rtops-* index.
- /etc/redelk/known_testsystems.conf: beacon characteristics of known test systems, one per line. Including data here will set a tag for applicable records in the rtops-* index.
- /etc/redelk/alarm.json.config: details required for alarms to work. This includes API keys for online services (VirusTotal, IBM X-Force, etc.) as well as the SMTP details required for sending alarms via e-mail.

If you alter these files prior to your initial setup, the changes will be included in the .tgz packages and can be used for future installations. These files can be found in ./RedELK/elkserver/etc/redelk.

To change the authentication for Nginx, change /etc/nginx/htpasswd.users to include your preferred credentials, or ./RedELK/elkserver/etc/nginx/htpasswd.users prior to initial setup.

Under the hood

If you want to take a look under the hood on the ELK server, take a look at the redelk cron file in /etc/cron.d/redelk. It starts several scripts in /usr/share/redelk/bin/. Some scripts are for enrichment, others are for alarming. The configuration of these scripts is done with the config files in /etc/redelk/.
There is also heavy enrichment done in logstash (including the generation of hyperlinks for screenshots, etc.). You can check that out directly from the logstash config files in /etc/logstash/conf.d/.

Current state and features on the to-do list

This project is still in alpha phase. This means that it works on our machines and in our environment, but no extended testing has been performed on different setups. It also means that the naming and structure of the code are still subject to change.

We are working (and you are invited to contribute) on the following features for next versions:

- Include the real external IP address of a beacon. As Cobalt Strike has no knowledge of the real external IP address of a beacon session, we need to get this from the traffic index. So far, we have not found a truly 100% reliable way of doing this.
- Support for Apache redirectors. Fully tested and working filebeat and logstash configuration files that support Apache-based redirectors. Possibly additional custom log configuration is needed for Apache. Low priority.
- Solve the rsyslog max log line issue. Rsyslog (the default syslog service on Ubuntu) breaks long syslog lines. Depending on the CS profile you use, this can become an issue. As a result, some fields are not properly parsed by logstash, and thus not properly included in elasticsearch.
- Ingest manual IOC data. When you upload a document, or something else, outside of Cobalt Strike, it will not be included in the IOC list. We want an easy way to have these manual IOCs included as well. One way would be to enter the data manually in the activity log of Cobalt Strike and have a logstash filter scrape the info from there.
- Ingest e-mails. Create input and filter rules for IMAP mailboxes. This way, we can use the same easy ELK interface for an overview of sent emails and replies.
- User-agent checks. Tagging and alarming on suspicious user-agents.
This will probably be divided into hardcoded checks (curl, wget, etc. connecting to the proper C2 URLs) and more dynamic analysis of suspicious user-agents.
- DNS traffic analysis. Ingest, filter and query for suspicious activities at the DNS level. This will take considerable work due to the large amount of noise/bogus DNS queries performed by scanners and online DNS inventory services.
- Other alarm channels. Think Slack, Telegram, or whatever other way you want to receive alarms.
- Fine-grained authorisation. The possibility of blocking certain views, searches and dashboards, or masking certain details in some views. Useful for situations where you don't want to give out all information to all visitors.

Usage

First time login

Browse to your RedELK server's IP address and log in with the credentials from Nginx (default is redelk:redelk). You are now in a Kibana interface. You may be asked to create a default index for Kibana. You can select any of the available indices; it doesn't matter which one you pick.

There are probably two things you want to do here: look at dashboards, or look at and search the data in more detail. You can switch between those views using the buttons on the left bar (default Kibana functionality).

Dashboards

Click on the dashboard icon on the left, and you'll be given two choices: Traffic and Beacon.

Looking at and searching data in detail

Click on the Discover button to look at and search the data in more detail. Once there, select the time range you want to use and click on the 'Open' button to use one of the prepared searches with views.

Beacon data

When selecting the search 'TimelineOverview', you are presented with an easy-to-use view of the data from the Cobalt Strike teamservers, a timeline of beacon events if you like. The view includes the relevant columns you want to have, such as timestamp, testscenario name, username, beacon ID, hostname, OS and OS version.
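Since this data lives in Elasticsearch, the Kibana search bar accepts Lucene query syntax, so the beacon timeline can be filtered directly. A hypothetical query might look like the following; the field names are taken from the columns listed above, but the values ('alice', 'TESTBOX') are made up, and you should verify the exact field names against your own rtops-* index:

```
username:alice AND NOT hostname:TESTBOX
```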
Finally, the full message from Cobalt Strike is shown.

You can modify this search to your liking. Also, because it's elasticsearch, you can search all the data in this index using the search bar.

Clicking on a record will show you its full details. An important field for usability is the beaconlogfile field. This field is a hyperlink, linking to the full beacon log file this record is from. It allows you to look at the beacon transcript in a bigger window and use CTRL+F within it.

Screenshots

RedELK comes with an easy way of looking at all the screenshots that were made from your targets. Select the 'Screenshots' search to get this overview. We added two big usability features: thumbnails and hyperlinks to the full pictures. The thumbnails are there to scroll through quickly and give you an immediate impression: often you still remember what the screenshot looked like.

Keystrokes

Just as with screenshots, it's very handy to have an easy overview of all keystrokes. This search gives you the first lines of content, as well as, again, a hyperlink to the full keystrokes log file.

IOC data

To get a quick list of all IOCs, RedELK comes with an easy overview. Just use the 'IOCs' search to get this list. This will present all IOC data from Cobalt Strike, both from files and from services.

You can quickly export this list by hitting the 'Reporting' button in the top bar to generate a CSV of this exact view.

Logging of RedELK

During installation, all actions are logged to a log file in the current working directory.

During operations, all RedELK-specific logs are logged on the ELK server in /var/log/redelk. You probably only need this for troubleshooting.

Authors and contribution

This project is developed and maintained by:
- Marc Smeets (@smeetsie on Github and @mramsmeets on Twitter)
- Mark Bergman (@xychix on Github and Twitter)

Download RedELK

Link: http://feedproxy.google.com/~r/PentestTools/~3/v3TIGlliuHU/redelk-easy-deployable-tool-for-red.html

Bincat – Binary Code Static Analyser, With IDA Integration

BinCAT is a static Binary Code Analysis Toolkit, designed to help reverse engineers, directly from IDA.

It features:
- value analysis (registers and memory)
- taint analysis
- type reconstruction and propagation
- backward and forward analysis
- use-after-free and double-free detection

In action

You can check (an older version of) BinCAT in action here:
- Basic analysis
- Using data tainting

Check out the tutorial to see the corresponding tasks.

Quick FAQ

Supported host platforms:
- IDA plugin: all, version 6.9 or later (BinCAT uses PyQt, not PySide)
- analyzer (local or remote): Linux, Windows, macOS (maybe)

Supported CPUs for analysis (for now):
- x86-32
- ARMv7
- ARMv8
- PowerPC

Installation

Only IDA v6.9 or later (7 included) is supported.

Binary distribution install (recommended)

The binary distribution includes everything needed:
- the analyzer
- the IDA plugin

Install steps:
1. Extract the binary distribution of BinCAT (not the git repo)
2. In IDA, click on the "File -> Script File..." menu (or type ALT-F7)
3. Select install_plugin.py
4. BinCAT is now installed in your IDA user dir
5. Restart IDA

Manual installation

Analyzer

The analyzer can be used locally or through a Web service.
- On Linux, using Docker: see the Docker installation instructions
- On Linux, manually: see the build and installation instructions
- On Windows: see the build instructions

IDA Plugin
- Windows manual install
- Linux manual install

BinCAT should work with IDA on Wine, once pip is installed:
1. download https://bootstrap.pypa.io/get-pip.py (verify it's good ;)
2. ~/.wine/drive_c/Python27/python.exe get-pip.py

Using BinCAT

Quick start

1. Load the plugin with the Ctrl-Shift-B shortcut, or via the Edit -> Plugins -> BinCAT menu
2. Go to the instruction where you want to start the analysis
3. Select the BinCAT Configuration pane, and click "<-- Current" to define the start address
4. Launch the analysis

Configuration

Global options can be configured through the Edit/BinCAT/Options menu. Default config and options are stored in $IDAUSR/idabincat/conf.

Options:
- "Use remote bincat": select this if you are running the analyzer in a Docker container or on a remote machine
- "Remote URL": http://localhost:5000 (or the URL of a remote BinCAT server)
- "Autostart": autoload BinCAT at IDA startup
- "Save to IDB": default state for the "save to idb" checkbox

Documentation

A manual is provided, and a description of the configuration file format is available. A tutorial is provided to help you try BinCAT's features.

Articles and presentations about BinCAT
- SSTIC 2017, Rennes, France: article (English), slides (French), video of the presentation (French)
- REcon 2017, Montreal, Canada: slides, video

Download Bincat

Link: http://www.kitploit.com/2019/02/bincat-binary-code-static-analyser-with.html
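For reference, a sketch of what the BinCAT options stored under $IDAUSR/idabincat/conf (as described in the Configuration section above) might look like; the key names here are assumptions chosen for illustration, not the plugin's actual file format:

```
# Key names below are illustrative assumptions, not BinCAT's real schema.
[options]
use_remote_bincat = False
remote_url = http://localhost:5000
autostart = False
save_to_idb = True
```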