Joy – A Package For Capturing And Analyzing Network Flow Data And Intraflow Data, For Network Research, Forensics, And Security Monitoring

Joy is a BSD-licensed libpcap-based software package for extracting data features from live network traffic or packet capture (pcap) files, using a flow-oriented model similar to that of IPFIX or Netflow, and then representing these data features in JSON. It also contains analysis tools that can be applied to these data files. Joy can be used to explore data at scale, especially security- and threat-relevant data.

JSON is used in order to make the output easily consumable by data analysis tools. While the JSON output files are somewhat verbose, they are reasonably small, and they respond well to compression.

Joy can be configured to obtain intraflow data, that is, data and information about events that occur within a network flow, including:

* the sequence of lengths and arrival times of IP packets, up to some configurable number of packets,
* the empirical probability distribution of the bytes within the data portion of a flow, and the entropy derived from that distribution (see the sketch after this list),
* the sequence of lengths and arrival times of TLS records,
* other non-encrypted TLS data, such as the list of offered ciphersuites, the selected ciphersuite, the length of the clientKeyExchange field, and the server certificate strings,
* DNS names, addresses, and TTLs,
* HTTP header elements and the first eight bytes of the HTTP body, and
* the name of the process associated with the flow, for flows that originate or terminate on the host on which pcap is running.
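The byte-distribution and entropy features above are easy to picture with a short, self-contained computation. The following is a minimal Python sketch, not Joy's C implementation; the function name and the payload argument (standing in for the reassembled data portion of a flow) are illustrative assumptions.

    import math
    from collections import Counter

    def byte_distribution_and_entropy(payload: bytes):
        """Empirical byte distribution and Shannon entropy (bits per byte)."""
        if not payload:
            return [0.0] * 256, 0.0
        counts = Counter(payload)
        total = len(payload)
        dist = [counts.get(b, 0) / total for b in range(256)]
        entropy = -sum(p * math.log2(p) for p in dist if p > 0)
        return dist, entropy

    # Random-looking (e.g., encrypted) payloads approach 8 bits per byte;
    # plain text scores considerably lower.
    _, h = byte_distribution_and_entropy(b"GET /index.html HTTP/1.1\r\n")
    print(f"{h:.2f} bits/byte")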
Joy is intended for use in security research, forensics, and for the monitoring of (small-scale) networks to detect vulnerabilities, threats, and other unauthorized or unwanted behavior. Researchers, administrators, penetration testers, and security operations teams can put this information to good use, for the protection of the networks being monitored, and in the case of vulnerabilities, for the benefit of the broader community through improved defensive posture. As with any network monitoring tool, Joy could potentially be misused; do not use it on any network of which you are not the owner or the administrator.

Flow, in positive psychology, is a state in which a person performing an activity is fully immersed in a feeling of energized focus, deep involvement, and joy. This second meaning inspired the choice of name for this software package.

Joy is alpha/beta software; we hope that you use it and benefit from it, but do understand that it is not suitable for production use.

TLS Fingerprinting

We have recently released the largest and most informative open source TLS fingerprint database. Among other features, our approach builds on previous work by being fully automated and by annotating TLS fingerprints with significantly more information. We have built a set of Python tools to enable the application of this database, as well as the generation of new databases with the help of Joy. For more information, please see the TLS fingerprinting documentation.

Relation to Cisco ETA

Joy has helped support the research that paved the way for Cisco's Encrypted Traffic Analytics (ETA), but it is not directly integrated into any of the Cisco products or services that implement ETA. The classifiers in Joy were trained on a small dataset several years ago, and do not represent the classification methods or performance of ETA. The intent of this feature is to allow network researchers to quickly train and deploy their own classifiers on a subset of the data features that Joy produces. For more information on training your own classifier, see saltUI/README or reach out to joy-users@cisco.com.

Credits

This package was written by David McGrew, Blake Anderson, Philip Perricone and Bill Hudson {mcgrew,blaander,phperric,bhudson}@cisco.com of Cisco Systems Advanced Security Research Group (ASRG) and Security and Trust Organization (STO).

Release 4.3.0

* Added IPv6 support to Joy and libjoy
* IPFIX collection and export only support IPv4
* NFv9 only supports IPv4
* Anonymization only supports IPv4 addresses
* Subnet labeling only supports IPv4 addresses

Release 4.2.0

* Re-wrote joy.c to use the libjoy library
* Updated joy.c to utilize multiple threads for flow processing
* Updated unit tests and Python tests to reflect the new code changes
* Removed the guts of the updater process to prepare for a re-write
* Fixed a bug in processing multiple files on the command line
* Other minor bug fixes

Release 4.0.3

* Added support for make install for CentOS

Release 4.0.2

* Added support for fingerprinting

Release 4.0.1

We are pleased to announce the 4.0.1 release of the package, which has these features:

* Added additional APIs for parent-application processing of flow records and data features
* Fixed TCP retransmission and out-of-order detection
* Better identification of the IDP packet
* Fixed some memory usage issues
* Fixed minor bugs
* Removed dead code

Release 4.0.0

We are pleased to announce the 4.0.0 release of the package, which has these features:

* Added support for building with autotools: ./configure; make clean; make

Release 3.0.0

We are pleased to announce the 3.0.0 release of the package, which has these features:

* Modified the Joy infrastructure code to be thread safe, allowing multiple worker threads for packet processing; each worker thread uses its own output file.
* Removed global variables for the configuration; modified the code infrastructure to use a Config structure.
* Modified the Makefile system to build the Joy infrastructure as a static and shared library.
* Implemented an API for utilizing the Joy library (joy_api.[hc]).
* Implemented a Vector Packet Processing (VPP) integration scheme to utilize VPP native infrastructure when building that integration.
* Created two API test programs, joy_api_test.c and joy_api_test2.c.
* Modified existing test programs to link against the static Joy library instead of re-compiling the infrastructure code.
* Modified versioning to use Common Security Module (CSM) conventions.
* Modified build_pkg to accept the package version on the command line.
* Cleaned up Coverity errors and warnings.
* Various bug fixes.

Release 2.0

We are pleased to announce the 2.0 release of the package, which has these features:

* The JSON schema has been updated to be better organized, more readable, and more searchable (by putting searchable keywords in the JSON names).
* The new sleuth tool replaces query/joyq, and brings new functionality such as --fingerprint.
* Much improved documentation, which covers the joy and sleuth tools, examples, and the JSON schema (see using-joy).

Quick Start

Joy has been successfully run and tested on Linux (Debian, Ubuntu, CentOS, and Raspbian), Mac OS X and Windows. The system has been built with gcc and GNU make, but it should work with other development environments as well. Go to the Wiki for a guide on building: Build Instructions

Download Joy
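Because Joy emits one JSON object per line, its output is straightforward to post-process with ordinary tooling. Below is a hedged sketch of reading a (possibly gzip-compressed) output file; the path is a placeholder, and the "sa"/"da" source/destination address field names are assumptions taken from Joy's JSON schema, so check the using-joy documentation for the authoritative layout.

    import gzip
    import json

    path = "joy.json.gz"  # hypothetical output file; Joy can gzip its output
    opener = gzip.open if path.endswith(".gz") else open

    with opener(path, "rt") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            # Skip metadata lines; flow records carry address fields
            # ("sa"/"da" assumed here).
            if "sa" in record:
                print(record["sa"], "->", record.get("da"))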

Link: http://www.kitploit.com/2019/05/joy-package-for-capturing-and-analyzing.html

ADAPT – Tool That Performs Automated Penetration Testing For WebApps

ADAPT is a tool that performs Automated Dynamic Application Penetration Testing for web applications. It is designed to increase accuracy, speed, and confidence in penetration testing efforts. ADAPT automatically tests for multiple industry-standard OWASP Top 10 vulnerabilities, and outputs categorized findings based on these potential vulnerabilities. ADAPT also uses the functionality of OWASP ZAP to perform automated active and passive scans, and auto-spidering (see the sketch at the end of this section). Due to the flexible nature of the ADAPT tool, all of these features and tests can be enabled or disabled from the configuration file. For more information on tests and configuration, please visit the ADAPT wiki.

How it Works

ADAPT uses Python to create an automated framework that drives industry-standard tools, such as OWASP ZAP and Nmap, through repeatable, well-designed procedures with anticipated results, producing an easily understandable report listing the vulnerabilities detected within the web application.

Automated Tests:

* OTG-IDENT-004 – Account Enumeration
* OTG-AUTHN-001 – Testing for Credentials Transported over an Encrypted Channel
* OTG-AUTHN-002 – Default Credentials
* OTG-AUTHN-003 – Testing for Weak Lock Out Mechanism
* OTG-AUTHZ-001 – Directory Traversal
* OTG-CONFIG-002 – Test Application Platform Configuration
* OTG-CONFIG-006 – Test HTTP Methods
* OTG-CRYPST-001 – Testing for Weak SSL/TLS Ciphers, Insufficient Transport Layer Protection
* OTG-CRYPST-002 – Testing for Padding Oracle
* OTG-ERR-001 – Testing for Error Code
* OTG-ERR-002 – Testing for Stack Traces
* OTG-INFO-002 – Fingerprinting the Webserver
* OTG-INPVAL-001 – Testing for Reflected Cross-Site Scripting
* OTG-INPVAL-002 – Testing for Stored Cross-Site Scripting
* OTG-INPVAL-003 – HTTP Verb Tampering
* OTG-SESS-001 – Testing for Session Management Schema
* OTG-SESS-002 – Cookie Attributes

Installing the Plugin

Detailed install instructions.

Download Adapt
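To make the ZAP-driven workflow concrete, here is a minimal sketch of spidering and active-scanning a target through the official ZAP Python API (the python-owasp-zap-v2.4 package). It is not ADAPT's own code; the API key, proxy address, and target URL are placeholder assumptions.

    import time
    from zapv2 import ZAPv2  # pip install python-owasp-zap-v2.4

    # Placeholder values -- adjust to your running ZAP instance.
    zap = ZAPv2(apikey="changeme",
                proxies={"http": "http://127.0.0.1:8080",
                         "https": "http://127.0.0.1:8080"})
    target = "http://testphp.example.com"  # hypothetical authorized target

    # Auto-spider the target, then wait for completion.
    scan_id = zap.spider.scan(target)
    while int(zap.spider.status(scan_id)) < 100:
        time.sleep(2)

    # Active scan; passive scanning happens automatically as traffic
    # flows through the ZAP proxy.
    scan_id = zap.ascan.scan(target)
    while int(zap.ascan.status(scan_id)) < 100:
        time.sleep(5)

    for alert in zap.core.alerts(baseurl=target):
        print(alert["risk"], alert["alert"], alert["url"])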

Link: http://www.kitploit.com/2019/01/adapt-tool-that-performs-automated.html

Aztarna – A Footprinting Tool For Robots

This repository contains Alias Robotics' aztarna, a footprinting tool for robots.

Alias Robotics supports original robot manufacturers in assessing their security and improving the quality of their software. By no means do we encourage or promote unauthorized tampering with running robotic systems, as this can cause serious human harm and material damage.

For ROS (see the sketch at the end of this section):

* A list of the ROS nodes present in the system (publishers and subscribers)
* For each node, the published and subscribed topics, including the topic type
* For each node, the ROS services each of the nodes offers
* A list of all ROS parameters present in the Parameter Server
* A list of the active communications running in the system. A single communication includes the involved publisher/subscriber nodes and the topics

For SROS:

* Determining if the system is an SROS master
* Detecting if the demo configuration is in use
* A list of the nodes found in the system (extended mode)
* A list of allow/deny policies for each node: publishable topics, subscribable topics, executable services, readable parameters

For industrial routers:

* Detecting eWON, Moxa, Sierra Wireless and Westermo industrial routers
* Default credential checking for found routers

Installing

For production, directly from PyPI:

pip3 install aztarna

or from the repository:

pip3 install .

For development:

pip3 install -e .

or

python3 setup.py develop

Python 3.7 and the setuptools package are required for installation.

Install with Docker:

docker build -t aztarna_docker .

Code usage:

usage: aztarna [-h] -t TYPE [-a ADDRESS] [-p PORTS] [-i INPUT_FILE]
               [-o OUT_FILE] [-e] [-r RATE] [--shodan] [--api-key API_KEY]

optional arguments:
  -h, --help            show this help message and exit
  -t TYPE, --type TYPE  Scan ROS, SROS hosts or Industrial routers
  -a ADDRESS, --address ADDRESS
                        Single address or network range to scan.
  -p PORTS, --ports PORTS
                        Ports to scan (format: 13311 or 11111-11155 or 1,2,3,4)
  -i INPUT_FILE, --input_file INPUT_FILE
                        Input file of addresses to use for scanning
  -o OUT_FILE, --out_file OUT_FILE
                        Output file for the results
  -e, --extended        Extended scan of the hosts
  -r RATE, --rate RATE  Maximum simultaneous network connections
  --shodan              Use shodan for the scan types that support it.
  --api-key API_KEY     Shodan API Key

Run the code (example input file):

aztarna -t ROS -p 11311 -i ros_scan_s20.csv

Run the code with Docker (example input file):

docker run -v <host_path>:/root -it aztarna_docker -t ROS -p 11311 -i <input_file>

Run the code (example single IP address):

aztarna -t ROS -p 11311 -a 115.129.241.241

Run the code (example subnet):

aztarna -t ROS -p 11311 -a 115.129.241.0/24

Run the code (example single IP address, port range):

aztarna -t ROS -p 11311-11500 -a 115.129.241.241

Run the code (example single IP address, port list):

aztarna -t ROS -p 11311,11312,11313 -a 115.129.241.241

Run the code (example piping directly from zmap):

zmap -p 11311 0.0.0.0/0 -q | aztarna -t SROS -p 11311

Run the code (example search for industrial routers in Shodan):

aztarna -t IROUTERS --shodan --api-key <yourshodanapikey>

Run the code (example search for industrial routers in Shodan, piping to file):

aztarna -t IROUTERS --shodan --api-key <yourshodanapikey> -o routers.csv

Download Aztarna
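For a sense of what the ROS footprinting above involves, here is a minimal sketch, not aztarna's own code, that queries a ROS master's standard XML-RPC API (getSystemState on the default port 11311) to enumerate publishers and services. The host address and caller id are placeholder assumptions; only probe systems you are authorized to test.

    import xmlrpc.client

    # Placeholder target address.
    master = xmlrpc.client.ServerProxy("http://192.0.2.10:11311/")

    # getSystemState is part of the standard ROS Master XML-RPC API;
    # the caller id ("/probe") is an arbitrary identifier.
    code, msg, (publishers, subscribers, services) = master.getSystemState("/probe")

    if code == 1:  # 1 == success in the ROS Master API
        for topic, nodes in publishers:
            print("topic", topic, "published by", nodes)
        for service, nodes in services:
            print("service", service, "offered by", nodes)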

Link: http://feedproxy.google.com/~r/PentestTools/~3/Q9CYfShlqRA/aztarna-footprinting-tool-for-robots.html

LightBulb Framework – Tools For Auditing WAFs

LightBulb is an open source Python framework for auditing web application firewalls and filters.

Synopsis

The framework consists of two main algorithms:

* GOFA: an active-learning algorithm that infers symbolic representations of automata in the standard membership/equivalence query model. Active learning permits the analysis of filter and sanitizer programs remotely, i.e. given only the ability to query the targeted program and observe the output. (A sketch of such a membership query appears at the end of this section.)
* SFADiff: a black-box differential testing algorithm based on Symbolic Finite Automata (SFA) learning. Finding differences between programs with similar functionality is an important security problem, as such differences can be used for fingerprinting or for creating evasion attacks against security software like Web Application Firewalls (WAFs), which are designed to detect malicious inputs to web applications.

Motivation

Web Application Firewalls (WAFs) are fundamental building blocks of modern application security. For example, the PCI standard for organizations handling credit card transactions dictates that any application facing the internet should either be protected by a WAF or successfully pass a code review process. Nevertheless, despite their popularity and importance, auditing web application firewalls remains a challenging and complex task. Finding attacks that bypass the firewall usually requires expert domain knowledge for a specific vulnerability class. Thus, penetration testers not armed with this knowledge are left with publicly available lists of attack strings, like the XSS Cheat Sheet, which are usually insufficient for thoroughly evaluating the security of a WAF product.

Commands Usage

Main interface commands:

* core – Shows available core modules
* utils – Shows available query handlers
* info – Prints module information
* library – Enters library
* modules – Shows available application modules
* use <module> – Enters module
* start <moduleA> <moduleB> – Initiates algorithm
* help – Prints help
* status – Checks and installs required packages
* complete – Prints bash completion command

Module commands:

* back – Go back to main menu
* info – Prints current module information
* library – Enters library
* options – Shows available options
* define <option> <value> – Sets an option value
* start – Initiates algorithm
* complete – Prints bash completion command

Library commands:

* back – Go back to main menu
* info <folder\module> – Prints requested module information (folder must be located in lightbulb/data/)
* cat <folder\module> – Prints requested module (folder must be located in lightbulb/data/)
* modules <folder> – Shows available library modules in the requested folder (folder must be located in lightbulb/data/)
* search <keywords> – Searches available library modules using comma-separated keywords
* complete – Prints bash completion command

Installation

Prepare your system

First you have to verify that your system supports flex, python dev, pip, and build utilities.

For apt platforms (Ubuntu, Debian, ...):

sudo apt-get install flex
sudo apt-get install python-pip
sudo apt-get install python-dev
sudo apt-get install build-essential

(Optional for apt) If you want to add support for MySQL testing:

sudo apt-get install libmysqlclient-dev

For yum platforms (CentOS, RedHat, Fedora, ...) with the extra packages repo (epel-release) already installed:

sudo yum install -y python-pip
sudo yum install -y python-devel
sudo yum install -y wget
sudo yum groupinstall -y 'Development Tools'

(Optional for yum) If you want to add support for MySQL testing:

sudo yum install -y mysql-devel
sudo yum install -y MySQL-python

Install LightBulb

In order to use the application without complete package installation:

git clone https://github.com/lightbulb-framework/lightbulb-framework
cd lightbulb-framework
make
lightbulb status

In order to perform a complete package installation, you can also install it from the pip repository. This requires first installing the latest setuptools version:

pip install setuptools --upgrade
pip install lightbulb-framework
lightbulb status

If you want to use virtualenv:

pip install virtualenv
virtualenv env
source env/bin/activate
pip install lightbulb-framework
lightbulb status

The "lightbulb status" command will guide you to install MySQLdb and OpenFst support. If you use virtualenv on Linux, the "sudo" command will be required only for the installation of the libmysqlclient-dev package.

It should be noted that the "lightbulb status" command is not necessary if you are going to use the Burp extension. The reason is that this command installs the "openfst" and "mysql" bindings, while the extension by default uses Jython, which does not support C bindings. It is recommended to use the command only if you want to change the Burp extension configuration from the settings and enable native support.

It is also possible to use a docker instance:

docker pull lightbulb/lightbulb-framework

Install Burp Extension

If you wish to use the new GUI, you can use the extension for the Burp Suite. First you have to set up a working environment with Burp Proxy and Jython:

* Download the latest Jython from here
* Find your local python packages installation folder*
* Configure Burp Extender to use these values, as shown below*
* Select the new LightBulb module ("BurpExtension.py") and set the extension type to be "Python"

*You can ignore this step, and install the standalone version which contains all the required python packages included. You can download it here.

Examples

Check out the Wiki page for usage examples.

Contributors

George Argyros, Ioannis Stais, Suman Jana, Angelos D. Keromytis, Aggelos Kiayias

References

G. Argyros, I. Stais, S. Jana, A. D. Keromytis, and A. Kiayias. 2016. SFADiff: Automated Evasion Attacks and Fingerprinting Using Black-box Differential Automata Learning. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS '16). ACM, New York, NY, USA, 1690-1701. doi: 10.1145/2976749.2978383

G. Argyros, I. Stais, A. Kiayias and A. D. Keromytis, "Back in Black: Towards Formal, Black Box Analysis of Sanitizers and Filters," 2016 IEEE Symposium on Security and Privacy (SP), San Jose, CA, 2016, pp. 91-109. doi: 10.1109/SP.2016.14

Download Lightbulb-Framework
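The membership queries that GOFA relies on can be pictured with a small stand-alone sketch: the learner submits a candidate string to the target WAF and observes only whether it is blocked. This is a hedged illustration, not LightBulb's own query handler; the target URL, parameter name, and the assumption that a block surfaces as HTTP 403 are all placeholders.

    import requests

    WAF_URL = "http://waf.example.com/search"  # hypothetical protected endpoint

    def membership_query(payload: str) -> bool:
        """Return True if the WAF accepts the string (it is in the language
        of allowed inputs), False if it is blocked.

        Assumption: this WAF signals a block with HTTP 403; real deployments
        may use other status codes or block pages."""
        r = requests.get(WAF_URL, params={"q": payload}, timeout=5)
        return r.status_code != 403

    # A learning algorithm such as GOFA would issue many such queries to
    # infer an automaton describing what the filter lets through.
    for probe in ["hello", "<script>", "' OR 1=1--"]:
        print(repr(probe), "accepted" if membership_query(probe) else "blocked")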

Link: http://www.kitploit.com/2018/12/lightbulb-framework-tools-for-auditing.html

HASSH – A Network Fingerprinting Standard Which Can Be Used To Identify Specific Client And Server SSH Implementations

"HASSH" is a network fingerprinting standard which can be used to identify specific client and server SSH implementations. The fingerprints can be easily stored, searched and shared in the form of an MD5 fingerprint.

What can HASSH help with:

* Use in highly controlled, well understood environments, where any fingerprints outside of a known good set are alertable.
* Detect, control and investigate brute force or credential stuffing password attempts at a higher level of granularity than IP source, which may be impacted by NAT or botnet-like behaviour. The hassh will be a feature of the specific client software implementation being used, even if the IP is NATed such that it is shared by many other SSH clients.
* Detect covert exfiltration of data within the components of the client algorithm sets. In this case, a specially coded SSH client can send data outbound from a trusted to a less trusted environment within a series of SSH_MSG_KEXINIT packets. In a scenario similar to the better-known exfiltration via DNS, data could be sent as a series of attempted, but incomplete and unlogged, connections to an SSH server controlled by bad actors, who can then record, decode and reconstitute these pieces of data into their original form. Until now such attempts, much less the contents of the clear-text packets, are not logged even by mature packet analyzers or on endpoint systems. Detection of this style of exfiltration can now be performed easily by using anomaly detection or by alerting on SSH clients with multiple different hassh values.
* Use in conjunction with other contextual indicators, for example to detect network discovery and lateral movement attempts by unusual hassh values such as those used by Paramiko, Powershell, Ruby, Meterpreter, Empire.
* Share malicious hassh values as Indicators of Compromise.
* Create an additional level of client application control; for example, one could block all clients from connecting to an SSH server that are outside of an approved known set of hassh values.
* Contribute to non-repudiation in a forensic context, at a higher level of abstraction than IP source, which may be impacted by NAT, or where multiple IP sources are used.
* Detect deceptive applications, e.g. a hasshServer value known to belong to the Cowrie/Kippo SSH honeypot server installation, which is purporting to be a common OpenSSH server in the server string.
* Detect devices having a hassh known to belong to IoT embedded systems. Examples may include cameras, mics, keyloggers, and wiretaps that could easily be hidden from view and communicating quietly over encrypted channels back to a control server.

How does HASSH work:

"hassh" and "hasshServer" are MD5 hashes constructed from a specific set of algorithms that are supported by various SSH client and server applications. These algorithms are exchanged after the initial TCP three-way handshake as clear-text packets known as "SSH_MSG_KEXINIT" messages, and are an integral part of the setup of the final encrypted SSH channel. The existence and ordering of these algorithms is unique enough that it can be used as a fingerprint to help identify the underlying client and server application or unique implementation, regardless of higher-level ostensible identifiers such as "Client" or "Server" strings.
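As a concrete illustration of the construction just described, the sketch below builds a client hassh from the four client-to-server algorithm lists carried in an SSH_MSG_KEXINIT message: each list is comma-joined, the four lists are joined with ';', and the result is MD5-hashed. The algorithm strings shown are example values, not a specific client's real KEXINIT.

    import hashlib

    # Example client-to-server algorithm lists as they might appear in an
    # SSH_MSG_KEXINIT packet (illustrative values, not a real capture).
    kex = ["curve25519-sha256", "ecdh-sha2-nistp256", "diffie-hellman-group14-sha256"]
    encryption = ["chacha20-poly1305@openssh.com", "aes128-ctr"]
    mac = ["hmac-sha2-256", "hmac-sha1"]
    compression = ["none", "zlib@openssh.com"]

    # hassh: MD5 over the ';'-joined, comma-separated algorithm lists,
    # in the order kex;encryption;mac;compression.
    hassh_str = ";".join(",".join(lst) for lst in (kex, encryption, mac, compression))
    hassh = hashlib.md5(hassh_str.encode()).hexdigest()
    print(hassh)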
References:

* RFC 4253, The Secure Shell (SSH) Transport Layer Protocol
* Salesforce Engineering blog

Credits:

hassh and hasshServer were conceived and developed by Ben Reardon (@benreardon) within the Detection Cloud Team at Salesforce, with inspiration and contributions from Adel Karimi (@0x4d31) and the JA3 crew: John B. Althouse, Jeff Atkinson and Josh Atkins.

Download Hassh

Link: http://feedproxy.google.com/~r/PentestTools/~3/K_mGl9HjOe4/hassh-network-fingerprinting-standard.html

Scannerl – The Modular Distributed Fingerprinting Engine

Scannerl is a modular distributed fingerprinting engine implemented by Kudelski Security. Scannerl can fingerprint thousands of targets on a single host, but can just as easily be distributed across multiple hosts. Scannerl is to fingerprinting what zmap is to port scanning.

Scannerl works on Debian/Ubuntu/Arch (but will probably work on other distributions as well). It uses a master/slave architecture where the master node distributes the work (the hosts to fingerprint) to its slaves (local or remote). The entire deployment is transparent to the user.

Why use Scannerl

When using conventional fingerprinting tools for large-scale analysis, security researchers often hit two limitations: first, these tools are typically built for scanning comparatively few hosts at a time and are inappropriate for large ranges of IP addresses. Second, if a large range of IP addresses protected by IPS devices is being fingerprinted, the probability of being blacklisted is higher, which could lead to an incomplete set of information. Scannerl is designed to circumvent these limitations, not only by providing the ability to fingerprint multiple hosts simultaneously, but also by distributing the load across an arbitrary number of hosts. Scannerl also makes the distribution of these tasks completely transparent, which makes setup and maintenance of large-scale fingerprinting projects trivial; this allows analysts to focus on the analyses rather than the herculean task of managing and distributing fingerprinting processes by hand. In addition to the speed factor, scannerl has been designed to make it easy to set up specific fingerprinting analyses in a few lines of code. Not only is a fingerprinting cluster easy to set up, but it can also be tweaked by adding fine-tuned scans to your fingerprinting campaigns.

It is the fastest tool to perform large-scale fingerprinting campaigns.

For more:

* Fingerprint all the things with scannerl at BlackAlps
* Fingerprinting MySQL with scannerl
* Fingerprint ICS/Scada with scannerl
* Distributed fingerprinting with scannerl
* 6 months of ICS scanning

Installation

See the different installation options under the wiki installation page.

To install from source, first install Erlang (at least v18) by choosing the right packaging for your platform: Erlang downloads

Install the required packages:

# on debian
$ sudo apt install erlang erlang-src rebar
# on arch
$ sudo pacman -S erlang-nox rebar

Then build scannerl:

$ git clone https://github.com/kudelskisecurity/scannerl.git
$ cd scannerl
$ ./build.sh

Get the usage by running:

$ ./scannerl -h

Scannerl is available on AUR for Arch Linux users: scannerl, scannerl-git. DEBs (Ubuntu, Debian) are available in the releases. RPMs (OpenSUSE, CentOS, RedHat) are available under https://build.opensuse.org/package/show/home:chapeaurouge/scannerl.

Distributed setup

Two types of nodes are needed to perform a distributed scan:

* Master node: this is where scannerl's binary is run
* Slave node(s): this is where scannerl will connect to distribute all its work

The master node needs to have scannerl installed and compiled, while the slave node(s) only need Erlang installed. The entire setup is transparent and done automatically by the master node.

Requirements for a distributed scan:

* All hosts have the same version of Erlang installed
* All hosts are able to connect to each other using SSH public keys
* All hosts' names resolve (use /etc/hosts if no proper DNS is set up)
* All hosts have the same Erlang security cookie
* All hosts must allow connection to the Erlang EPMD port (TCP/4369)
* All hosts have the following range of ports opened: TCP/11100 to TCP/11100 + number-of-slaves

Usage

$ ./scannerl -h

[ASCII-art SCANNERL banner]

USAGE
  scannerl MODULE TARGETS [NODES] [OPTIONS]

MODULE:
  -m <mod> --module <mod>          mod: the fingerprinting module to use.
                                   Arguments are separated with a colon.
TARGETS:
  -f <target> --target <target>    target: a list of targets separated by a comma.
  -F <path> --target-file <path>   path: the path of the file containing one target per line.
  -d <domain> --domain <domain>    domain: a list of domains separated by a comma.
  -D <path> --domain-file <path>   path: the path of the file containing one domain per line.
NODES:
  -s <node> --slave <node>         node: a list of nodes (hostnames, not IPs) separated by a comma.
  -S <path> --slave-file <path>    path: the path of the file containing one node per line.
                                   A node can also be supplied with a multiplier (<node>*<nb>).
OPTIONS:
  -o <mod> --output <mod>          comma-separated list of output module(s) to use.
  -p <port> --port <port>          the port to fingerprint.
  -t <sec> --timeout <sec>         the fingerprinting process timeout.
  -T <sec> --stimeout <sec>        slave connection timeout (default: 10).
  -j <nb> --max-pkt <nb>           max pkt to receive (int or "infinity").
  -r <nb> --retry <nb>             retry counter (default: 0).
  -c <cidr> --prefix <cidr>        sub-divide range with prefix > cidr (default: 24).
  -M <port> --message <port>       port to listen for messages (default: 57005).
  -P <nb> --process <nb>           max simultaneous processes per node (default: 28232).
  -Q <nb> --queue <nb>             max nb of unprocessed results in queue (default: infinity).
  -C <path> --config <path>        read arguments from file, one per line.
  -O <mode> --outmode <mode>       0: on master, 1: on slave, >1: on broker (default: 0).
  -v <val> --verbose <val>         be verbose (0 <= int <= 255).
  -K <opt> --socket <opt>          comma-separated socket options (key[:value]).
  -l --list-modules                list available fp/out modules.
  -V --list-debug                  list available debug options.
  -A --print-args                  output the args record.
  -X --priv-ports                  use only source ports between 1 and 1024.
  -N --nosafe                      keep going even if some slaves fail to start.
  -w --www                         DNS will try for www.<domain>.
  -b --progress                    show progress.
  -x --dryrun                      dry run.

See the wiki for more.

Standalone usage

Scannerl can be used on the local host without any other host. However, it will still create a slave node on the same host it is run from.
Therefore, the requirements described in Distributed setup must also be met. A quick way to do this is to make sure your host is able to resolve itself with:

grep -q "127.0.1.1\s*`hostname`" /etc/hosts || echo "127.0.1.1 `hostname`" | sudo tee -a /etc/hosts

and to create an SSH key (if not yet present) and add it to the authorized_keys (you need an SSH server running):

cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

The following example runs an HTTP banner grab on google.com from localhost:

./scannerl -m httpbg -d google.com

Distributed usage

In order to perform a distributed scan, one needs to pre-setup the hosts that will be used by scannerl to distribute the work. See Distributed setup for more information. Scannerl expects a list of slaves to use (provided by the -s or -S switches).

./scannerl -m httpbg -d google.com -s host1,host2,host3

List available modules

Scannerl will list the available modules (output modules as well as fingerprinting modules) with the -l switch:

$ ./scannerl -l

Fingerprinting modules available
================================
bacnet          UDP/47808: Bacnet identification
chargen         UDP/19: Chargen amplification factor identification
fox             TCP/1911: FOX identification
httpbg          TCP/80: HTTP Server header identification - Arg1: [true|false] follow redirection [Default:false]
httpsbg         SSL/443: HTTPS Server header identification
https_certif    SSL/443: HTTPS certificate grabber
imap_certif     TCP/143: IMAP STARTTLS certificate grabber
modbus          TCP/502: Modbus identification
mqtt            TCP/1883: MQTT identification
mqtts           TCP/8883: MQTT over SSL identification
mysql_greeting  TCP/3306: Mysql version identification
pop3_certif     TCP/110: POP3 STARTTLS certificate grabber
smtp_certif     TCP/25: SMTP STARTTLS certificate grabber
ssh_host_key    TCP/22: SSH host key grabber

Output modules available
========================
csv              output to csv - Arg1: [true|false] save everything [Default:true]
csvfile          output to csv file - Arg1: [true|false] save everything [Default:false] - Arg2: File path
file             output to file - Arg1: File path
file_ip          output to file (only ip) - Arg1: File path
file_mini        output to file (only ip and result) - Arg1: File path
file_resultonly  output to file (only result) - Arg1: File path
stdout           output to stdout
stdout_ip        output to stdout (only IP)
stdout_mini      output to stdout (only ip and result)

Modules arguments

Arguments can be provided to modules with a colon. For example, for the file output module:

./scannerl -m httpbg -d google.com -o file:/tmp/result

Result format

The result returned by scannerl to the output modules has the following form:

{module, target, port, result}

Where:

* module: the module used (Erlang atom)
* target: IP or hostname (string or IPv4 address)
* port: the port (integer)
* result: see below

The result part is of the form:

{{status, type}, Value}

Where {status, type} is one of the following tuples:

* {ok, result}: fingerprinting the target succeeded
* {error, up}: fingerprinting didn't succeed but the target responded
* {error, unknown}: fingerprinting failed

Value is the returned value; it is either an atom or a list of elements.
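As a rough illustration of consuming this result format outside of Erlang, the sketch below parses stdout-style result lines with a regular expression. It is entirely hypothetical: the exact line layout depends on the output module you choose, so treat the sample line and pattern as assumptions to adapt.

    import re

    # Hypothetical stdout-style line following the documented
    # {module, target, port, result} / {{status, type}, Value} shape.
    sample = '{fp_httpbg,"192.0.2.1",80,{{ok,result},"nginx/1.14.0"}}'

    pattern = re.compile(
        r'\{(?P<module>\w+),"(?P<target>[^"]+)",(?P<port>\d+),'
        r'\{\{(?P<status>\w+),(?P<type>\w+)\},(?P<value>.+)\}\}'
    )

    m = pattern.match(sample)
    if m:
        d = m.groupdict()
        if (d["status"], d["type"]) == ("ok", "result"):
            print(d["target"], d["port"], "->", d["value"].strip('"'))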
Extending Scannerl

Scannerl has been designed and implemented with modularity in mind. It is easy to add new modules:

* Fingerprinting module: to query a specific protocol or service. As an example, the fp_httpbg.erl module retrieves the server entry in the HTTP response.
* Output module: to output to a specific database/filesystem or to output the result in a specific format. For example, the out_file.erl and out_stdout.erl modules output, respectively, to a file or to stdout (the default behavior if no output module is specified).

To create new modules, simply follow the behavior (fp_module.erl for fingerprinting modules and out_behavior.erl for output modules) and implement your module. New modules can either be added at compile time or dynamically as an external file. See the wiki page for more.

Download Scannerl

Link: http://feedproxy.google.com/~r/PentestTools/~3/rR7h1XIp-fk/scannerl-modular-distributed.html