Justniffer – Network TCP Packet Sniffer

Justniffer is a network protocol analyzer that captures network traffic and produces logs in a customized way: it can emulate Apache web server log files, track response times and extract all "intercepted" files from HTTP traffic. It lets you interactively trace TCP traffic from a live network or from a previously saved capture file. Justniffer's native capture file format is the libpcap format, which is also the format used by tcpdump and various other tools.

Reliable TCP flow rebuilding
Justniffer's main feature is its ability to handle complex low-level protocol issues and retrieve the correct flow of the TCP/IP traffic: IP fragmentation, TCP retransmission, reordering, etc. It uses portions of Linux kernel source code for handling all the TCP/IP details. Specifically, it uses a slightly modified version of the libnids libraries, which already repackage modified Linux kernel code in a more reusable way.

Optimized for "request/response" protocols: it is able to track server response time
Justniffer was born as a tool to help analyze performance problems in complex network environments, where it becomes impractical to analyze network captures solely with low-level packet sniffers (Wireshark, tcpdump, etc.). It helps you quickly identify the most significant bottlenecks by analyzing performance at the "application" protocol level. In very complex and distributed systems it is often useful to understand how communication takes place between different components; when this communication is implemented as a network protocol based on TCP/IP (HTTP, JDBC, RTSP, SIP, SMTP, IMAP, POP, LDAP, REST, XML-RPC, IIOP, SOAP, etc.), justniffer comes in handy. Often the logging and monitoring of these systems does not report information that is important for determining performance issues, such as the response time of each network request.
This may be because they run in a "production" environment and cannot be too verbose, or because they are in-house developed applications that do not provide such logging. At other times it is desirable to collect access logs from web services implemented on different environments (various web servers, application servers, Python web frameworks, etc.), or from web services that are not accessible and are therefore traceable only on the client side. Justniffer can capture traffic in promiscuous mode, so it can be installed on a dedicated, independent station within the same network "collision domain" as the gateway of the systems to be analyzed, collecting all traffic without affecting system performance and without requiring invasive installation of new software in production environments.

Can rebuild and save HTTP content to files
The robust implementation of TCP flow reconstruction turns it into a multipurpose sniffer:
- HTTP sniffer
- LDAP sniffer
- SMTP sniffer
- SIP sniffer
- password sniffer
justniffer can also be used to retrieve files sent over the network.

It is extensible
Justniffer can be extended by external scripts. A Python script has been developed to recover all files sent via HTTP (images, text, HTML, JavaScript, etc.).

Features Summary
- Reliable TCP flow rebuilding: it can reorder and reassemble TCP segments and IP fragments using portions of the Linux kernel code
- Customizable text-mode logging
- Extensibility by any executable, such as bash, Python or Perl scripts, ELF executables, etc.
- Performance measurement: it can collect much information on performance: connection time, request time, response time, close time, etc.

Download Justniffer
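The logging behaviour described above can be sketched with a few example invocations (hedged: the flags shown, such as -i, -f and -a, are taken from justniffer's documented options; the interface and file names are placeholders):

```shell
# emulate an Apache-style access log for live HTTP traffic on eth0
sudo justniffer -i eth0

# append the measured server response time to each log line
sudo justniffer -i eth0 -a " %response.time"

# post-process a previously saved libpcap capture instead of a live interface
justniffer -f capture.pcap
```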

Link: http://feedproxy.google.com/~r/PentestTools/~3/ZeOTT8XrMaE/justniffer-network-tcp-packet-sniffer.html

Volatility Workbench – A GUI For Volatility Memory Forensics

Volatility Workbench is a graphical user interface (GUI) for the Volatility tool. Volatility is a command line memory analysis and forensics tool for extracting artifacts from memory dumps. Volatility Workbench is free, open source and runs on Windows. It provides a number of advantages over the command line version, including:
- No need to remember command line parameters.
- Storage of the operating system profile, KDBG address and process list with the memory dump, in a .CFG file. When a memory image is re-loaded, this saves a lot of time and avoids the frustration of not knowing the correct profile to select.
- Simpler copy & paste.
- Simpler printing of paper copies (via right click).
- Simpler saving of the dumped information to a file on disk.
- A drop-down list of available commands and a short description of what each command does.
- Time stamping of the commands executed.
- Auto-loading of the first dump file found in the current folder.
- Support for analysing Mac and Linux memory dumps.

Download Volatility Workbench
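For comparison, this is roughly what the Workbench saves you from typing: a hedged sketch of the underlying Volatility 2 command line (the image name and profile are example values):

```shell
# list processes from a Windows 7 SP1 x64 memory image
volatility -f memdump.raw --profile=Win7SP1x64 pslist

# the --profile value is what the Workbench stores in its .CFG file
volatility -f memdump.raw --profile=Win7SP1x64 netscan
```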

Link: http://feedproxy.google.com/~r/PentestTools/~3/OzWarBRi5YU/volatility-workbench-gui-for-volatility.html

HTTrack Website Copier – Web Crawler And Offline Browser

HTTrack allows you to download a World Wide Web site from the Internet to a local directory, recursively building all directories and getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link structure: simply open a page of the "mirrored" website in your browser, and you can browse the site from link to link, as if you were viewing it online. HTTrack can also update an existing mirrored site, and resume interrupted downloads. HTTrack is fully configurable, and has an integrated help system.

WinHTTrack is the Windows (from Windows 2000 to Windows 10 and above) release of HTTrack, and WebHTTrack is the Linux/Unix/BSD release.

Download HTTrack Website Copier
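A minimal usage sketch (the URL and output directory are examples; see HTTrack's built-in help for the full option list):

```shell
# mirror a site into ./mirror, following only links on the same domain
httrack "https://www.example.com/" -O "./mirror" "+*.example.com/*" -v

# update an existing mirror, or resume an interrupted download
httrack --update -O "./mirror"
```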

Link: http://feedproxy.google.com/~r/PentestTools/~3/-iUl75kJzG4/httrack-website-copier-web-crawler-and.html

OSFMount – Mount Disk Images & Create RAM Drives

OSFMount allows you to mount local disk image files (bit-for-bit copies of a disk partition) in Windows with a drive letter. You can then analyze the disk image file with PassMark OSForensics™ by using the mounted volume's drive letter. By default, image files are mounted as read-only so that the original image files are not altered.

OSFMount also supports the creation of RAM disks, basically a disk mounted into RAM. This generally has a large speed benefit over using a hard disk, and is therefore useful for applications requiring high-speed disk access, such as database applications, games (such as game cache files) and browsers (cache files). A second benefit is security: the disk contents are not stored on a physical hard disk (but rather in RAM), and on system shutdown the disk contents are not persistent. At the time of writing, we believe this is the fastest RAM drive software available.

OSFMount supports mounting images of CDs in .ISO format, which can be useful when a particular CD is used often and the speed of access is important.

Download OSFMount
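OSFMount also ships a command-line companion (OSFMount.com); a heavily hedged sketch follows, as exact flags vary between versions; check the PassMark documentation before use:

```shell
:: mount a disk image read-only as drive Z:
OSFMount.com -a -t file -f C:\images\disk.img -m Z:

:: create a 512 MB RAM drive mounted as R:
OSFMount.com -a -t vm -s 512M -m R:
```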

Link: http://feedproxy.google.com/~r/PentestTools/~3/b1UlY7C2tko/osfmount-mount-disk-images-create-ram.html

Process Hacker – A Free, Powerful, Multi-Purpose Tool That Helps You Monitor System Resources, Debug Software And Detect Malware

A free, powerful, multi-purpose tool that helps you monitor system resources, debug software and detect malware.

System requirements
Windows 7 or higher, 32-bit or 64-bit.

Features
- A detailed overview of system activity with highlighting.
- Graphs and statistics allow you to quickly track down resource hogs and runaway processes.
- Can't edit or delete a file? Discover which processes are using that file.
- See what programs have active network connections, and close them if necessary.
- Get real-time information on disk access.
- View detailed stack traces with kernel-mode, WOW64 and .NET support.
- Go beyond services.msc: create, edit and control services.
- Small, portable and no installation required.
- 100% Free Software (GPL v3)

Building the project
Requires Visual Studio (2017 or later). Execute build_release.cmd located in the build directory to compile the project, or load the ProcessHacker.sln and Plugins.sln solutions if you prefer building the project using Visual Studio. You can download the free Visual Studio Community Edition to build, run or develop Process Hacker.

Additional information
Unlike many other programs, you cannot run the 32-bit version of Process Hacker on a 64-bit system and expect it to work correctly.

Enhancements/Bugs
Please use the GitHub issue tracker for reporting problems or suggesting new features.

Settings
If you are running Process Hacker from a USB drive, you may want to save Process Hacker's settings there as well. To do this, create a blank file named "ProcessHacker.exe.settings.xml" in the same directory as ProcessHacker.exe.
You can do this using Windows Explorer:
- Make sure "Hide extensions for known file types" is unticked in Tools > Folder options > View.
- Right-click in the folder and choose New > Text Document.
- Rename the file to ProcessHacker.exe.settings.xml (delete the ".txt" extension).

Plugins
Plugins can be configured from Hacker > Plugins. If you experience any crashes involving plugins, make sure they are up to date. Disk and Network information provided by the ExtendedTools plugin is only available when running Process Hacker with administrative rights.

KProcessHacker
Process Hacker uses a kernel-mode driver, KProcessHacker, to assist with certain functionality. This includes:
- Capturing kernel-mode stack traces
- More efficiently enumerating process handles
- Retrieving names for file handles
- Retrieving names for EtwRegistration objects
- Setting handle attributes
Note that by default, KProcessHacker only allows connections from processes with administrative privileges (SeDebugPrivilege). To allow Process Hacker to show details for all processes when it is not running as administrator:
- In Registry Editor, navigate to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\KProcessHacker3
- Under this key, create a key named Parameters if it does not exist.
- Create a DWORD value named SecurityLevel and set it to 2. If you are not using an official build, you may need to set it to 0 instead.
- Restart the KProcessHacker3 service (sc stop KProcessHacker3, sc start KProcessHacker3).

Download Processhacker
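The registry steps above can also be performed from an elevated command prompt (equivalent commands, not an official script):

```shell
:: set KProcessHacker3's SecurityLevel to 2 (use 0 for unofficial builds)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\KProcessHacker3\Parameters" /v SecurityLevel /t REG_DWORD /d 2 /f

:: restart the driver service
sc stop KProcessHacker3
sc start KProcessHacker3
```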

Link: http://feedproxy.google.com/~r/PentestTools/~3/nL_bfHHeQgA/process-hacker-free-powerful-multi.html

CANalyzat0r – Security Analysis Toolkit For Proprietary Car Protocols

This software project is the result of a Bachelor's thesis created at SCHUTZWERK in collaboration with Aalen University by Philipp Schmied. Please refer to the corresponding blog post for more information.

Why another CAN tool?
- Built from scratch with new ideas for analysis mechanisms
- Bundles features of many other tools in one place
- Modular and extensible: read the docs and implement your own analysis mechanisms
- Comfortable analysis using a GUI
- Manage work in separate projects using a database
- Documentation: read the docs if you need a manual or technical info

Installing and running
- Run sudo ./install_requirements.sh along with sudo -E ./CANalyzat0r.sh. This will create a folder called pipenv with a pipenv environment in it.
- Or just use the Docker version, which is recommended at this time (check the README.md file in the subdirectory).
- For more information, read the HTML or PDF version of the documentation in the ./doc/build folder.

Features
- Manage interface configuration (automatic loading of kernel modules, manage physical and virtual SocketCAN devices)
- Multi-interface support
- Manage your work in projects. You can also import and export them in the human readable/editable JSON format
- Logging of all actions
- Graphical sniffing
- Manage findings, dumps and known packets per project
- Easy copy and paste between tabs
- Also, you can just paste your SocketCAN files into a table that allows pasting
- Threaded sending, fuzzing and sniffing at the same time
- Add multiple analyzing threads on the GUI
- Ignore packets when sniffing: automatically filter unique packets by ID, or by data and ID
- Compare dumps
- Allows setting up complex setups using only one window
- Clean organization in tabs for each analysis task
- Binary packet filtering with randomization
- Search for action-specific packets using background noise filtering
- SQLite support
- Fuzz and change the values on the fly

Testing it
You can use the Instrument Cluster Simulator in order to tinker with a virtual CAN bus without having to attach real CAN devices to your machine.

Troubleshooting
Empty GUI windows: please make sure that the QT_X11_NO_MITSHM environment variable is set to 1. When using sudo, please include the -E option in order to preserve this environment variable, as follows: sudo -E ./CANalyzat0r.sh.

Fixing the GUI style: this application has to be run as superuser. Because of a missing configuration, the displayed style can be set to an unwanted value when the effective UID is 0. To fix this behaviour, follow these steps:
- Quick way: execute echo "[QT]\nstyle=CleanLooks" >> ~/.config/Trolltech.conf
- Alternative way: install qt4-qtconfig (sudo apt-get install qt4-qtconfig), run qtconfig-qt4 as superuser and change the GUI style to CleanLooks or GTK+
- Or use the Docker container

Download CANalyzat0r
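CANalyzat0r manages virtual SocketCAN devices for you; if you want to prepare one by hand for testing, the standard SocketCAN commands look like this (the interface name vcan0 is an example):

```shell
# load the virtual CAN kernel module and bring up a virtual interface
sudo modprobe vcan
sudo ip link add dev vcan0 type vcan
sudo ip link set up vcan0
```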

Link: http://feedproxy.google.com/~r/PentestTools/~3/KPeA8qxDNEk/canalyzat0r-security-analysis-toolkit.html

DFIRTrack – The Incident Response Tracking Application

DFIRTrack (Digital Forensics and Incident Response Tracking application) is an open source web application mainly based on Django, using a PostgreSQL database backend.

In contrast to other great incident response tools, which are mainly case-based and support the daily work of CERTs, SOCs, etc., DFIRTrack is focused on handling one major incident with a lot of affected systems, as is often observed in APT cases. It is meant to be used as a tool for dedicated incident response teams in large cases. So, of course, CERTs and SOCs may use DFIRTrack as well, but they may find it more appropriate for special cases than for everyday work. In contrast to case-based applications, DFIRTrack works in a system-based fashion. It keeps track of the status of various systems and the tasks associated with them, keeping the analyst well-informed about the status and number of affected systems at any time, from the investigation phase up to the remediation phase of the incident response process.

Features
One focus is the fast and reliable import and export of systems and associated information. The goal for importing systems is to provide a fast and error-free procedure. Moreover, the goal for exporting systems and their status is to have multiple instances of documentation: for instance, detailed Markdown reports for technical staff vs. spreadsheets for non-technical audiences, without redundancies and deviations in the data sets. A manager whose numbers match is a happy manager! ;-)

The following functions are implemented for now:

Importer
- Creator (fast creation of multiple related instances via the web interface) for systems and tasks,
- CSV (simple and generic CSV-based import, either hostname and IP or hostname and tags, combined with a web form; should fit the export capabilities of many tools),
- Markdown for entries (one entry per system (report)).

Exporter
- Markdown for so-called system reports (for use in a MkDocs structure),
- Spreadsheet (CSV and XLS),
- LaTeX (planned).

Installation and dependencies
DFIRTrack is developed for deployment on Debian Stretch or Ubuntu 16.04. Other Debian-based distributions or versions may work but have not been tested yet. At the moment the project is focused on Ubuntu LTS and Debian releases. For fast and uncomplicated installation on a dedicated server, including all dependencies, an Ansible playbook and role were written (available here). For testing, a Docker environment was prepared (see below).

For a minimal setup the following dependencies are needed: django (2.0), django_q, djangorestframework, gunicorn, postgresql, psycopg2-binary, python3-pip, PyYAML, requests, virtualenv, xlwt.

Note that there is no settings.py in this repository. This file is submitted via Ansible or has to be copied and configured by hand. That will be changed in the future (see issues for more information).

Docker Environment
An experimental Docker Compose environment for local-only usage is provided in this project. Run the following command in the project root directory to start the environment: docker-compose up
A user admin is already created. A password can be set with: docker/setup_admin.sh
The application is located at localhost:8000.

Built-in software
The application was created using the following libraries and code: Bootstrap, clipboard.js, DataTables, jQuery, Open Iconic, Popper.js.

Development
There are two main branches: master and development. The master branch should be stable (as far as you can expect from an alpha version). New features and changes are added to the development branch and merged into master from time to time. Everything merged into development should run too, but might need manual changes (e.g. config). The development branch of DFIRTrack Ansible should follow these changes. So if you want to see the latest features and progress: "check out" development.

Disclaimer
This software is in an early alpha phase, so a lot of work remains to be done. Even if some basic error checking is implemented, as of now the usage of DFIRTrack mainly depends on proper handling. DFIRTrack was not, and most likely never will be, intended for usage on publicly available servers. Even though some basic security features were implemented (in particular in connection with the corresponding Ansible role), always install DFIRTrack in a secured environment (e.g. a dedicated virtual machine or a separated network)!

Download Dfirtrack
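The Docker Compose steps above, collected in order (local-only and experimental, per the project):

```shell
# from the project root directory
docker-compose up        # start the environment
docker/setup_admin.sh    # set a password for the pre-created 'admin' user
# the application is then available at http://localhost:8000
```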

Link: http://feedproxy.google.com/~r/PentestTools/~3/vHFBZOQWsMA/dfirtrack-incident-response-tracking.html

RedELK – Easy Deployable Tool For Red Teams Used For Tracking And Alarming About Blue Team Activities As Well As Better Usability In Long Term Operations

Red Team's SIEM: an easily deployable tool for Red Teams, used for tracking and alarming about Blue Team activities, as well as better usability for the Red Team in long-term operations.

Initial public release at BruCON 2018:
Video: https://www.youtube.com/watch?v=OjtftdPts4g
Presentation slides: https://github.com/outflanknl/Presentations/blob/master/MirrorOnTheWall_BruCon2018_UsingBlueTeamTechniquesinRedTeamOps_Bergman-Smeets_FINAL.pdf

Goal of the project
Short: a Red Team's SIEM.
Longer: a Red Team's SIEM that serves three goals:
- Enhanced usability and overview for the red team operators, by creating a central location where all relevant operational logs from multiple teamservers are collected and enriched. This is great for historic searching within the operation, as well as giving a read-only view on the operation (e.g. for the White Team). It is especially useful for multi-scenario, multi-teamserver, multi-member and multi-month operations. Also, super easy ways for viewing all screenshots, IOCs, keystrokes output, etc. \o/
- Spot the Blue Team, by having a central location where all traffic logs from redirectors are collected and enriched. Using specific queries, it is now possible to detect that the Blue Team is investigating your infrastructure.
- Out-of-the-box usable, by being easy to install and deploy, as well as having ready-made views, dashboards and alarms.

Here's a conceptual overview of how RedELK works. RedELK uses the typical components Filebeat (shipping), Logstash (filtering), Elasticsearch (storage) and Kibana (viewing). Rsync is used for a second syncing of teamserver data: logs, keystrokes, screenshots, etc. Nginx is used for authentication to Kibana, as well as for serving the screenshots, beacon logs and keystrokes in an easy way in the operator's browser. A set of Python scripts is used for heavy enriching of the log data, and for Blue Team detection.

Supported tech and requirements
RedELK currently supports:
- Cobalt Strike teamservers
- HAProxy for HTTP redirector data.
Apache support is expected soon.
- Tested on Ubuntu 16 LTS

RedELK requires a modification to the default HAProxy configuration in order to log more details.

In the 'general' section:
log-format frontend:%f/%H/%fi:%fp\ backend:%b\ client:%ci:%cp\ GMT:%T\ useragent:%[capture.req.hdr(1)]\ body:%[capture.req.hdr(0)]\ request:%r

In the 'frontend' section:
declare capture request len 40000
http-request capture req.body id 0
capture request header User-Agent len 512

Installation

First time installation
Adjust ./certs/config.cnf to include the right details for the TLS certificates. Once done, run: initial-setup.sh
This will create a CA, generate the necessary certificates for secure communication between redirectors, teamserver and elkserver, and generate an SSH keypair for secure rsync authentication of the elkserver to the teamserver. It also generates teamservers.tgz, redirs.tgz and elkserver.tgz, which contain the installation packages for each component. Rerunning this initial setup is not required, but if you want new certificates for a new operation, you can simply run it again.

Installation of redirectors
Copy and extract redirs.tgz on your redirector as part of your red team infra deployment procedures. Run: install-redir.sh $FilebeatID $ScenarioName $IP/DNS:PORT
- $FilebeatID is the identifier of this redirector within filebeat.
- $ScenarioName is the name of the attack scenario this redirector is used for.
- $IP/DNS:PORT is the IP or DNS name and port where filebeat logs are shipped to.
This script will set the timezone (default Europe/Amsterdam), install filebeat and dependencies, install required certificates, adjust the filebeat configuration and start filebeat.

Installation of teamserver
Copy and extract teamservers.tgz on your Cobalt Strike teamserver as part of your red team infra deployment procedures.
Run: install-teamserver.sh $FilebeatID $ScenarioName $IP/DNS:PORT
- $FilebeatID is the identifier of this teamserver within filebeat.
- $ScenarioName is the name of the attack scenario this teamserver is used for.
- $IP/DNS:PORT is the IP or DNS name and port where filebeat logs are shipped to.
This script will warn if filebeat is already installed (important, as ELK and filebeat can be very picky about having equal versions), set the timezone (default Europe/Amsterdam), install filebeat and dependencies, install required certificates, adjust the filebeat configuration, start filebeat, create a local user 'scponly' and limit that user to SSH key-based auth via scp/sftp/rsync.

Installation of ELK server
Copy and extract elkserver.tgz on your RedELK server as part of your red team infra deployment procedures. Run: install-elkserver.sh
This script will set the timezone (default Europe/Amsterdam), install logstash, elasticsearch, kibana and dependencies, install required certificates, deploy the logstash configuration and required custom Ruby enrichment scripts, download GeoIP databases, install and configure Nginx, create a local user 'redelk' with the earlier generated SSH keys, install the script for rsyncing of remote logs on teamservers, install the script used for creating thumbnails of screenshots, install the RedELK configuration files, install the crontab file for RedELK tasks, install GeoIP elasticsearch plugins and adjust the template, install the Python enrichment scripts, and finally install the Python blue team detection scripts.

You are not done yet. You need to manually enter the details of your teamservers in /etc/cron.d/redelk, as well as tune the config files in /etc/redelk (see the section below).

Setting up enrichment and detection
On the ELK server, in the /etc/redelk directory, you can find several files that you can use to tune your RedELK instance for better enrichments and better alarms.
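Put together, the deployment flow above looks roughly like this (hedged: the hostnames, port, scenario names and Filebeat IDs are placeholder values):

```shell
./initial-setup.sh                                          # build host: CA, certs, .tgz packages
./install-redir.sh redir01 scenario1 elk.example.com:5044   # on each redirector
./install-teamserver.sh ts01 scenario1 elk.example.com:5044 # on each teamserver
./install-elkserver.sh                                      # on the RedELK server itself
```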
These files are:
- /etc/redelk/iplist_customer.conf: public IP addresses of your target, one per line. Including an address here will set a tag for applicable records in the redirhaproxy-* index.
- /etc/redelk/iplist_redteam.conf: public IP addresses of your red team, one per line. Convenient for identifying testing done by red team members. Including an address here will set a tag for applicable records in the redirhaproxy-* index.
- /etc/redelk/iplist_unknown.conf: public IP addresses of gateways that you are not sure about yet, but don't want to be warned about again. One per line. Including an address here will set a tag for applicable records in the redirhaproxy-* index.
- /etc/redelk/known_sandboxes.conf: beacon characteristics of known AV sandbox systems. One per line. Including data here will set a tag for applicable records in the rtops-* index.
- /etc/redelk/known_testsystems.conf: beacon characteristics of known test systems. One per line. Including data here will set a tag for applicable records in the rtops-* index.
- /etc/redelk/alarm.json.config: details required for alarms to work. This includes API keys for online services (VirusTotal, IBM X-Force, etc.) as well as the SMTP details required for sending alarms via e-mail.
If you alter these files prior to your initial setup, these changes will be included in the .tgz packages and can be used for future installations. These files can be found in ./RedELK/elkserver/etc/redelk.

To change the authentication for Nginx, change /etc/nginx/htpasswd.users to include your preferred credentials, or ./RedELK/elkserver/etc/nginx/htpasswd.users prior to initial setup.

Under the hood
If you want to take a look under the hood on the ELK server, take a look at the redelk cron file in /etc/cron.d/redelk. It starts several scripts in /usr/share/redelk/bin/. Some scripts are for enrichment, others are for alarming. The configuration of these scripts is done with the config files in /etc/redelk/.
There is also heavy enrichment done in logstash (including the generation of hyperlinks for screenshots, etc.). You can check that out directly from the logstash config files in /etc/logstash/conf.d/.

Current state and features on the to-do list
This project is still in an alpha phase. This means that it works on our machines and in our environment, but no extended testing has been performed on different setups. It also means that the naming and structure of the code are still subject to change. We are working on (and you are invited to contribute to) the following features for next versions:
- Include the real external IP address of a beacon. As Cobalt Strike has no knowledge of the real external IP address of a beacon session, we need to get this from the traffic index. So far, we have not found a truly 100% reliable way of doing this.
- Support for Apache redirectors. Fully tested and working filebeat and logstash configuration files that support Apache-based redirectors. Possibly additional custom log configuration is needed for Apache. Low priority.
- Solve the rsyslog max log line issue. Rsyslog (the default syslog service on Ubuntu) breaks long syslog lines. Depending on the CS profile you use, this can become an issue. As a result, some of the fields are not properly parsed by logstash, and thus not properly included in elasticsearch.
- Ingest manual IOC data. When you upload a document, or something else, outside of Cobalt Strike, it will not be included in the IOC list. We want an easy way to have these manual IOCs included as well. One way would be to enter the data manually in the activity log of Cobalt Strike and have a logstash filter scrape the info from there.
- Ingest e-mails. Create input and filter rules for IMAP mailboxes. This way, we can use the same easy ELK interface for an overview of sent emails and their replies.
- User-agent checks. Tagging and alarming on suspicious user-agents.
This will probably be divided into hardcoded stuff like curl, wget, etc. connecting with the proper C2 URLs, but also more dynamic analysis of suspicious user-agents.
- DNS traffic analysis. Ingest, filter and query for suspicious activities at the DNS level. This will take considerable work due to the large amount of noise/bogus DNS queries performed by scanners and online DNS inventory services.
- Other alarm channels. Think Slack, Telegram, or whatever other way you want for receiving alarms.
- Fine-grained authorisation. The possibility of blocking certain views, searches and dashboards, or masking certain details in some views. Useful for situations where you don't want to give out all information to all visitors.

Usage

First time login
Browse to your RedELK server's IP address and log in with the credentials from Nginx (default is redelk:redelk). You are now in a Kibana interface. You may be asked to create a default index for Kibana. You can select any of the available indices; it doesn't matter which one you pick.
There are probably two things you want to do here: look at dashboards, or look at and search the data in more detail. You can switch between those views using the buttons on the left bar (default Kibana functionality).

Dashboards
Click on the dashboard icon on the left, and you'll be given two choices: Traffic and Beacon.

Looking at and searching data in detail
Click on the Discover button to look at and search the data in more detail. Once there, select the time range you want to use and click on the 'Open' button to use one of the prepared searches with views.

Beacon data
When selecting the search 'TimelineOverview', you are presented with an easy-to-use view of the data from the Cobalt Strike teamservers, a timeline of beacon events if you like. The view includes the relevant columns you want to have, such as timestamp, test scenario name, username, beacon ID, hostname, OS and OS version.
Finally, the full message from Cobalt Strike is shown. You can modify this search to your liking. Also, because it's elasticsearch, you can search all the data in this index using the search bar. Clicking on the details of a record will show you the full details. An important field for usability is the beaconlogfile field. This field is a hyperlink, linking to the full beacon log file this record is from. It allows you to look at the beacon transcript in a bigger window and use CTRL+F within it.

Screenshots
RedELK comes with an easy way of looking at all the screenshots that were made from your targets. Select the 'Screenshots' search to get this overview. We added two big usability things: thumbnails and hyperlinks to the full pictures. The thumbnails are there to quickly scroll through and give you an immediate impression: often you still remember what the screenshot looked like.

Keystrokes
Just as with screenshots, it is very handy to have an easy overview of all keystrokes. This search gives you the first lines of content, as well as, again, a hyperlink to the full keystrokes log file.

IOC data
To get a quick list of all IOCs, RedELK comes with an easy overview. Just use the 'IOCs' search to get this list. This will present all IOC data from Cobalt Strike, both from files and from services. You can quickly export this list by hitting the 'Reporting' button in the top bar to generate a CSV of this exact view.

Logging of RedELK
During installation, all actions are logged in a log file in the current working directory. During operations, all RedELK-specific logs are logged on the ELK server in /var/log/redelk. You probably only need this for troubleshooting.

Authors and contribution
This project is developed and maintained by:
- Marc Smeets (@smeetsie on GitHub and @mramsmeets on Twitter)
- Mark Bergman (@xychix on GitHub and Twitter)

Download RedELK

Link: http://feedproxy.google.com/~r/PentestTools/~3/v3TIGlliuHU/redelk-easy-deployable-tool-for-red.html

Bincat – Binary Code Static Analyser, With IDA Integration

BinCAT is a static Binary Code Analysis Toolkit, designed to help reverse engineers, directly from IDA. It features:
- value analysis (registers and memory)
- taint analysis
- type reconstruction and propagation
- backward and forward analysis
- use-after-free and double-free detection

In action
You can check (an older version of) BinCAT in action here: basic analysis, using data tainting. Check out the tutorial to see the corresponding tasks.

Quick FAQ
Supported host platforms:
- IDA plugin: all, version 6.9 or later (BinCAT uses PyQt, not PySide)
- analyzer (local or remote): Linux, Windows, macOS (maybe)
Supported CPUs for analysis (for now):
- x86-32
- ARMv7
- ARMv8
- PowerPC

Installation
Only IDA v6.9 or later (7 included) is supported.

Binary distribution install (recommended)
The binary distribution includes everything needed:
- the analyzer
- the IDA plugin
Install steps:
- Extract the binary distribution of BinCAT (not the git repo).
- In IDA, click on the "File -> Script File..." menu (or type ALT-F7).
- Select install_plugin.py.
- BinCAT is now installed in your IDA user dir.
- Restart IDA.

Manual installation
Analyzer: the analyzer can be used locally or through a web service.
- On Linux: using Docker (see the Docker installation instructions), or manually (see the build and installation instructions).
- On Windows: see the build instructions.
IDA plugin: see the Windows manual install or the Linux manual install. BinCAT should work with IDA on Wine, once pip is installed:
- download https://bootstrap.pypa.io/get-pip.py (verify it's good ;))
- ~/.wine/drive_c/Python27/python.exe get-pip.py

Using BinCAT

Quick start
- Load the plugin by using the Ctrl-Shift-B shortcut, or via the Edit -> Plugins -> BinCAT menu.
- Go to the instruction where you want to start the analysis.
- Select the BinCAT Configuration pane, and click <-- Current to define the start address.
- Launch the analysis.

Configuration
Global options can be configured through the Edit/BinCAT/Options menu. Default config and options are stored in $IDAUSR/idabincat/conf.

Options
- "Use remote bincat": select this if you are running the analyzer in a Docker container
- "Remote URL":
http://localhost:5000 (or the URL of a remote BinCAT server)"Autostart": autoload BinCAT at IDA startup"Save to IDB": default state for the save to idb checkboxDocumentationA manual is provided and check here for a description of the configuration file format.A tutorial is provided to help you try BinCAT's features.Article and presentations about BinCATSSTIC 2017, Rennes, France: article (english), slides (french), video of the presentation (french)REcon 2017, Montreal, Canada: slides, videoDownload Bincat
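To give a feel for the taint analysis listed among BinCAT's features, here is a toy forward taint-propagation sketch over a made-up two-operand register machine. This is purely illustrative and is not BinCAT's implementation: the real analyzer works on disassembled machine code and also tracks memory, types, and backward slices.

```python
# Toy forward taint propagation: a register becomes "tainted" when it is
# derived from attacker-controlled input, and clean when overwritten with
# untainted data. Instructions are (op, dst, src) triples; src may be a
# register name (str) or an immediate (int).

def propagate(program, tainted_inputs):
    regs = {}
    taint = set(tainted_inputs)
    for op, dst, src in program:
        src_val = regs.get(src, 0) if isinstance(src, str) else src
        src_tainted = isinstance(src, str) and src in taint
        if op == "mov":
            regs[dst] = src_val
            # mov overwrites dst, so its taint comes solely from the source
            if src_tainted:
                taint.add(dst)
            else:
                taint.discard(dst)
        elif op == "add":
            regs[dst] = regs.get(dst, 0) + src_val
            # add mixes dst and src: dst stays/becomes tainted if either was
            if src_tainted:
                taint.add(dst)
    return regs, taint
```

Running it on a short program where `user_input` is attacker-controlled shows `ebx` ending up tainted (it was derived from `user_input` via `eax`) while `ecx` stays clean.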

Link: http://www.kitploit.com/2019/02/bincat-binary-code-static-analyser-with.html

Fwknop – Single Packet Authorization & Port Knocking

fwknop implements an authorization scheme known as Single Packet Authorization (SPA) for strong service concealment. SPA requires only a single packet, which is encrypted, non-replayable, and authenticated via an HMAC, in order to communicate desired access to a service that is hidden behind a firewall in a default-drop filtering stance. The main application of SPA is to use a firewall to drop all attempts to connect to services such as SSH in order to make the exploitation of vulnerabilities (both 0-day and unpatched code) more difficult. Because there are no open ports, any service concealed by SPA naturally cannot be scanned for with Nmap. The fwknop project supports four different firewalls: iptables, firewalld, PF, and ipfw across Linux, OpenBSD, FreeBSD, and Mac OS X. There is also support for custom scripts, so fwknop can be made to support other infrastructure such as ipset or nftables.

SPA is essentially next-generation Port Knocking (PK): it solves many of the limitations exhibited by PK while retaining its core benefits. PK's limitations include a general difficulty in protecting against replay attacks, the inability to reliably support asymmetric ciphers and HMAC schemes, and the fact that it is trivially easy to mount a DoS attack against a PK server just by spoofing an additional packet into a PK sequence as it traverses the network (thereby convincing the PK server that the client doesn’t know the proper sequence). All of these shortcomings are solved by SPA. At the same time, SPA hides services behind a default-drop firewall policy, acquires SPA data passively (usually via libpcap or other means), and implements standard cryptographic operations for SPA packet authentication and encryption/decryption.

SPA packets generated by fwknop leverage HMAC for authenticated encryption in the encrypt-then-authenticate model.
Although the usage of an HMAC is currently optional (enabled via the --use-hmac command line switch), it is highly recommended for three reasons:

1. Without an HMAC, cryptographically strong authentication is not possible with fwknop unless GnuPG is used, but even then an HMAC should still be applied.
2. An HMAC applied after encryption protects against cryptanalytic CBC-mode padding oracle attacks such as the Vaudenay attack and related trickery (like the more recent "Lucky 13" attack against SSL).
3. The code required by the fwknopd daemon to verify an HMAC is much simpler than the code required to decrypt an SPA packet, so an SPA packet without a proper HMAC isn’t even sent through the decryption routines.

The final reason above is why an HMAC should still be used even when SPA packets are encrypted with GnuPG: SPA data is not sent through libgpgme functions unless the HMAC checks out first. GnuPG and libgpgme are relatively complex bodies of code, so limiting a potential attacker's ability to interact with this code through an HMAC operation helps maintain a stronger security stance. Generating an HMAC for SPA communications requires a dedicated key in addition to the normal encryption key; both can be generated with the --key-gen option.

fwknop encrypts SPA packets either with the Rijndael block cipher or via GnuPG and an associated asymmetric cipher. If the symmetric encryption method is chosen, then as usual the encryption key is shared between the client and server (see the /etc/fwknop/access.conf file for details). The actual encryption key used for Rijndael encryption is generated via the standard PBKDF1 key derivation algorithm, and CBC mode is used.
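The verify-before-decrypt ordering described above can be sketched in a few lines of Python. This is a conceptual illustration only, not fwknop's code: a toy XOR keystream stands in for Rijndael/CBC, and the key values are made up.

```python
import hmac
import hashlib

def _xor(data: bytes, key: bytes) -> bytes:
    # toy stand-in for Rijndael in CBC mode; NOT real encryption
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def seal(plaintext: bytes, enc_key: bytes, hmac_key: bytes) -> bytes:
    # encrypt-then-authenticate: the MAC is computed over the ciphertext
    ct = _xor(plaintext, enc_key)
    tag = hmac.new(hmac_key, ct, hashlib.sha256).digest()
    return ct + tag

def open_sealed(packet: bytes, enc_key: bytes, hmac_key: bytes):
    ct, tag = packet[:-32], packet[-32:]
    expected = hmac.new(hmac_key, ct, hashlib.sha256).digest()
    # like fwknopd: if the HMAC does not check out, the packet is
    # never handed to the decryption routines at all
    if not hmac.compare_digest(tag, expected):
        return None
    return _xor(ct, enc_key)
```

The point of the ordering is visible in `open_sealed`: a forged or tampered packet is rejected by a cheap, simple HMAC comparison before any of the (much larger) decryption code paths are exercised.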
If the GnuPG method is chosen, the encryption keys are derived from GnuPG key rings.

Use Cases
People who use Single Packet Authorization (SPA) or its security-challenged cousin Port Knocking (PK) usually access SSHD running on the same system where the SPA/PK software is deployed. That is, a firewall running on a host has a default-drop policy against all incoming SSH connections so that SSHD cannot be scanned, but an SPA daemon reconfigures the firewall to temporarily grant access to a passively authenticated SPA client: "Basic SPA usage to access SSHD"

fwknop supports the above, but also goes much further and makes robust use of NAT (for iptables/firewalld firewalls). After all, important firewalls are usually gateways between networks rather than being deployed only on standalone hosts. NAT is commonly used on such firewalls (at least for IPv4 communications) to provide Internet access to internal networks on RFC 1918 address space, and also to allow external hosts access to services hosted on internal systems.

Because fwknop integrates with NAT, SPA can be leveraged by users on the external Internet to access internal services through the firewall. Although this has plenty of applications on modern traditional networks, it also allows fwknop to support cloud computing environments such as Amazon’s AWS: "SPA usage on Amazon AWS cloud environments"

User Interface
The official cross-platform fwknop client user interface, fwknop-gui (download, github), is developed by Jonathan Bennett. Most major client-side SPA modes are supported, including NAT requests, HMAC and Rijndael keys (GnuPG is not yet supported), fwknoprc stanza saving, and more.
Currently fwknop-gui runs on Linux, Mac OS X, and Windows; here is a screenshot from OS X: "fwknop-gui on Mac OS X". An updated Android client is available as well.

Tutorial
A comprehensive tutorial on fwknop can be found here: http://www.cipherdyne.org/fwknop/docs/fwknop-tutorial.html

Features
The following is a complete list of features supported by the fwknop project:
- Implements Single Packet Authorization around iptables and firewalld firewalls on Linux, ipfw firewalls on *BSD and Mac OS X, and PF on OpenBSD.
- The fwknop client runs on Linux, Mac OS X, *BSD, and Windows under Cygwin. In addition, there is an Android app to generate SPA packets.
- Supports both Rijndael and GnuPG methods for the encryption/decryption of SPA packets.
- Supports HMAC authenticated encryption for both Rijndael and GnuPG. The order of operations is encrypt-then-authenticate to avoid various cryptanalytic problems.
- Replay attacks are detected and thwarted by SHA-256 digest comparison of valid incoming SPA packets. Other digest algorithms are also supported, but SHA-256 is the default.
- SPA packets are passively sniffed from the wire via libpcap. The fwknopd server can also acquire packet data from a file written by a separate Ethernet sniffer (such as with tcpdump -w), from the iptables ULOG pcap writer, or directly via a UDP socket in --udp-server mode.
- For iptables firewalls, ACCEPT rules added by fwknop are added to and deleted (after a configurable timeout) from custom iptables chains, so that fwknop does not interfere with any existing iptables policy that may already be loaded on the system.
- Supports inbound NAT connections for authenticated SPA communications (iptables firewalls only for now). This means fwknop can be configured to create DNAT rules so that you can reach a service (such as SSH) running on an internal system on an RFC 1918 IP address from the open Internet. SNAT rules are also supported, which essentially turns fwknopd into an SPA-authenticating gateway for accessing the Internet from an internal network.
- Multiple users are supported by the fwknop server, and each user can be assigned their own symmetric or asymmetric encryption key via the /etc/fwknop/access.conf file.
- Automatic resolution of the external IP address via https://www.cipherdyne.org/cgi-bin/myip (useful when the fwknop client is run from behind a NAT device). Because the external IP address is encrypted within each SPA packet in this mode, Man-in-the-Middle (MITM) attacks, in which an inline device intercepts an SPA packet and forwards it from a different IP in an effort to gain access, are thwarted.
- Port randomization is supported for the destination port of SPA packets as well as the port over which the follow-on connection is made via the iptables NAT capabilities. The latter applies to forwarded connections to internal services and to access granted to local sockets on the system running fwknopd.
- Integration with Tor (as described in this DefCon 14 presentation). Note that because Tor uses TCP for transport, sending SPA packets through the Tor network requires each SPA packet to be sent over an established TCP connection, so technically this breaks the "single" aspect of "Single Packet Authorization". However, Tor provides anonymity benefits that can outweigh this consideration in some deployments.
- Implements a versioned protocol for SPA communications, so it is easy to extend the protocol to offer new SPA message types while maintaining backwards compatibility with older fwknop clients.
- Supports the execution of shell commands on behalf of valid SPA packets.
- The fwknop server can be configured to place multiple restrictions on inbound SPA packets beyond those enforced by encryption keys and replay attack detection: namely, packet age, source IP address, remote user, access to requested ports, and more.
- Bundled with fwknop is a comprehensive test suite that issues a series of tests designed to verify that both the client and server pieces of fwknop work properly. These tests involve sniffing SPA packets over the local loopback interface, building temporary firewall rules that are checked for the appropriate access based on the testing config, and parsing output from both the fwknop client and fwknopd server for expected markers for each test. Test suite output can easily be anonymized for communication to third parties for analysis.
- fwknop was the first program to integrate port knocking with passive OS fingerprinting. However, Single Packet Authorization offers many security benefits beyond port knocking, so the port knocking mode of operation is generally deprecated.

Building fwknop
This distribution uses GNU autoconf for setting up the build. Please see the INSTALL file for the general basics on using autoconf.

There are some "configure" options that are specific to fwknop (extracted from ./configure --help):
- --disable-client: Do not build the fwknop client component. The default is to build the client.
- --disable-server: Do not build the fwknop server component. The default is to build the server.
- --with-gpgme: support for gpg encryption using libgpgme [default=check]
- --with-gpgme-prefix=PFX: prefix where GPGME is installed (optional)
- --with-gpg=/path/to/gpg: specify path to the gpg executable that gpgme will use [default=check path]
- --with-firewalld=/path/to/firewalld: specify path to the firewalld executable [default=check path]
- --with-iptables=/path/to/iptables: specify path to the iptables executable [default=check path]
- --with-ipfw=/path/to/ipfw: specify path to the ipfw executable [default=check path]
- --with-pf=/path/to/pfctl: specify path to the pf executable [default=check path]
- --with-ipf=/path/to/ipf: specify path to the ipf executable [default=check path]

Examples:
./configure --disable-client --with-firewalld=/bin/firewall-cmd
./configure --disable-client --with-iptables=/sbin/iptables --with-firewalld=no

Download Fwknop
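For reference, a typical autoconf build sequence for a source tarball looks like the following. This is a generic sketch, not fwknop-specific documentation: the tarball name is a placeholder, and the INSTALL file shipped with the distribution is the authoritative guide.

```shell
# Generic GNU autoconf build sketch (illustrative; consult INSTALL).
tar xfz fwknop-<version>.tar.gz      # <version> is a placeholder
cd fwknop-<version>

# Server-only build, with an explicit iptables path as in the examples above
./configure --disable-client --with-iptables=/sbin/iptables

make
sudo make install                    # installs fwknopd and man pages
```

The `--with-*` paths only matter for the server component; a client-only build (`--disable-server`) needs none of the firewall options.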

Link: http://www.kitploit.com/2019/02/fwknop-single-packet-authorization-port.html