pyHAWK – Searches The Directory Of Choice For Interesting Files, Such As Database Files And Files With Passwords Stored In Them

Searches the directory of choice for interesting files, such as database files and files with passwords stored in them.

Features
- Scans a directory for interesting file types
- Outputs them to the screen
- Supports many file types

Installation Instructions
The installation is easy. Git clone the repo and run it with Python 2:

git clone https://github.com/MetaChar/pyHAWK
python2 main.py

Usage
To set a directory, use -d or --directory:

python2 main.py -d

File Extensions
Cryptography: .pem, .pkcs12, .p12, .pfx, .asc, .jks, .keychain
Password Files: .agilekeychain, .kwallet, .bek, .tpm, .psafe3
Database Files: .sdf, .sqlite, .fve, .pcap, .gnucash, .dayone, .mdf
Misc Files: .log
Config Files: .cscfg, .rdp, .tblk, .ovpn

File Names
Password Files: credentials.xml, robomongo.json, filezilla.xml, recentservers.xml, ventrilo_srv.ini, terraform.tfvars, secret_token.rb, carrierwave.rb, omniauth.rb, settings.py, database.yml
Database Files: journal.txt, Favorites.plist
Misc Files: root.txt, users.txt, passwords.txt, login.txt
Config Files: jenkins.plugins.publish_over_ssh.BapSshPublisherPlugin.xml, LocalSettings.php, configuration.user.xpl, knife.rb

Inspiration
Inspired by ice3man543; check it out here: https://github.com/Ice3man543/hawkeye

Download pyHAWK
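The core idea above, matching files against watch lists of extensions and names while walking a directory, can be sketched in a few lines of Python. This is an illustrative re-implementation, not pyHAWK's actual code; the sets below are only a subset of its lists.

```python
import os

# Subset of pyHAWK's watch lists, for illustration only.
INTERESTING_EXTENSIONS = {".pem", ".p12", ".pfx", ".kwallet", ".sqlite",
                          ".sdf", ".log", ".rdp", ".ovpn"}
INTERESTING_NAMES = {"credentials.xml", "passwords.txt", "database.yml", "knife.rb"}

def find_interesting(directory):
    """Walk `directory` and return paths whose extension or name looks sensitive."""
    hits = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            ext = os.path.splitext(name)[1].lower()
            if ext in INTERESTING_EXTENSIONS or name.lower() in INTERESTING_NAMES:
                hits.append(os.path.join(root, name))
    return hits
```

The real tool additionally groups its findings by category (cryptography, password, database, config) when printing them.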

Link: http://feedproxy.google.com/~r/PentestTools/~3/G9hiIRMsIzI/pyhawk-searches-directory-of-choice-for.html

Deep Explorer – A Tool Whose Purpose Is The Search Of Hidden Services In The Tor Network, Using Ahmia And Crawling The Links Obtained

Dependencies
pip3 install -r requirements.txt
You should also have Tor installed.

Usage
python3 deepexplorer.py STRING_TO_SEARCH NUMBER_OF_RESULTS TYPE_OF_CRAWL

Examples:
python3 deepexplorer.py "legal thing" 40 default legal (will crawl if the results obtained in the browser do not reach 40; the script will also show links whose HTML contains the string "legal", like an intext dork in Google)
python3 deepexplorer.py "illegal thing" 30 all dni (will crawl every link obtained in the browser until it reaches 30; the script will also show links whose HTML contains the string "dni", like an intext dork in Google)
python3 deepexplorer.py "legal thing" 30 none (do not crawl; only obtain links from the browser)

About
Deep Explorer is a tool designed to search (any)thing in a few seconds.
Any idea, failure, etc., please report via Telegram: blueudp
results.txt contains the results obtained in the previous search.
Tested on ParrotOS and Kali Linux 2.0.

Types of Errors
Error importing... -> you should try a manual pip install of the package
Error connecting to server -> can't connect to the Ahmia browser
If Deep Explorer cannot execute "service ...", do it manually; Deep Explorer checks the Tor instance at the beginning, so it will skip that part.

Contact
Name: Eduardo Pérez-Malumbres
Telegram: @blueudp
Twitter: https://twitter.com/blueudp

Download Deep-Explorer
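The two steps Deep Explorer chains together, pulling .onion links out of search-result HTML and then keeping only pages whose HTML contains the dork string, can be sketched like this. This is a hypothetical illustration, not the tool's actual code; the regex and function names are my own.

```python
import re

# Rough pattern for v2/v3 onion addresses; stops at whitespace, quotes, or tags.
ONION_RE = re.compile(r"https?://[a-z2-7]{16,56}\.onion[^\s\"'<]*")

def extract_onion_links(html):
    """Return the unique .onion URLs found in a page of search-result HTML."""
    return sorted(set(ONION_RE.findall(html)))

def matches_dork(page_html, dork):
    """Case-insensitive 'intext'-style check on a crawled page's HTML."""
    return dork.lower() in page_html.lower()
```

In the real tool, the crawl step repeats link extraction on each fetched page (over Tor) until the requested number of results is reached.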

Link: http://feedproxy.google.com/~r/PentestTools/~3/Uky3GEJ7r8k/deep-explorer-tool-which-purpose-is.html

R3Con1Z3R – A Lightweight Web Information Gathering Tool With Intuitive Features (OSINT)

R3con1z3r is a lightweight web information gathering tool with intuitive features, written in Python. It provides a powerful environment in which open source intelligence (OSINT) web-based footprinting can be conducted quickly and thoroughly.

Footprinting is the first phase of ethical hacking: the collection of every possible piece of information about the target. R3con1z3r is a passive reconnaissance tool with built-in functionality that includes: HTTP header flags, traceroute, whois footprinting, DNS information, sites on the same server, Nmap port scanning, reverse target lookup, and hyperlinks on a webpage. After being provided with the necessary inputs, the tool generates output in HTML format.

Installation
r3con1z3r supports Python 2 and Python 3.

$ git clone https://github.com/abdulgaphy/r3con1z3r.git
$ cd r3con1z3r
$ pip install -r requirements.txt

Optional for Linux users:
$ sudo chmod +x r3con1z3r.py

Modules
r3con1z3r depends only on the sys and requests Python modules.
Python 3: $ pip3 install -r requirements.txt
For coloring on Windows: pip install win_unicode_console colorama

Usage
python3 r3con1z3r.py [domain.com]

Examples
To run on all operating systems (Linux, Windows, Mac OS X, Android, etc.), i.e. a Python 2 environment:
python r3con1z3r.py google.com
To run in a Python 3 environment:
python3 r3con1z3r.py facebook.com
To run as an executable (Unix only):
./r3con1z3r.py google.com

Download R3Con1Z3R
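One of the passive steps listed above, harvesting the hyperlinks on a fetched page, is simple to sketch with the standard library. This is an illustrative stand-in, not r3con1z3r's own code (which also gathers headers, whois, DNS and more), and it uses Python 3's html.parser rather than anything the tool ships.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets from every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def hyperlinks(html):
    """Return all hyperlink targets found in an HTML document, in order."""
    parser = LinkCollector()
    parser.feed(html)
    return parser.links
```

In a recon workflow the input HTML would come from a `requests.get()` of the target, and the collected links would feed the tool's HTML report.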

Link: http://feedproxy.google.com/~r/PentestTools/~3/xpd1vC23W3c/r3con1z3r-lightweight-web-information.html

ZIP Shotgun – Utility Script To Test Zip File Upload Functionality (And Possible Extraction Of Zip Files) For Vulnerabilities

Utility script to test zip file upload functionality (and possible extraction of zip files) for vulnerabilities. The idea for this script comes from a post on the Silent Signal Techblog – Compressed File Upload And Command Execution – and from OWASP – Test Upload of Malicious Files.

This script creates an archive containing files with "../" in their filenames. When extracted, this can cause files to be written to preceding directories, which can allow an attacker to extract shells to directories that are reachable from a web browser.

The default webshell is wwwolf's PHP web shell, and all the credit for it goes to WhiteWinterWolf. The source is available HERE.

Installation
Install using Python pip:
pip install zip-shotgun --upgrade

Or clone the git repository and install from the root directory of the cloned repository (where the setup.py file is located):
git clone https://github.com/jpiechowka/zip-shotgun.git
pip install . --upgrade

Usage and options
Usage: zip-shotgun [OPTIONS] OUTPUT_ZIP_FILE

Options:
  --version                        Show the version and exit.
  -c, --directories-count INTEGER  Count of how many directories to go back inside the zip file (e.g. 3 means that 3 files will be added to the zip: shell.php, ../shell.php and ../../shell.php, where shell.php is the name of the shell you provided or a randomly generated value) [default: 16]
  -n, --shell-name TEXT            Name of the shell inside the generated zip file (e.g. shell). If not provided it will be randomly generated. Cannot contain whitespace.
  -f, --shell-file-path PATH       A file that contains code for the shell. If this option is not provided, the wwwolf (https://github.com/WhiteWinterWolf/wwwolf-php-webshell) PHP shell will be added instead. If a name is provided it will be added to the zip with that name; otherwise the name will be randomly generated.
  --compress                       Enable compression. If this flag is set, the archive will be compressed using the DEFLATE algorithm with a compression level of 9. By default no compression is applied.
  -h, --help                       Show this message and exit.

Examples

Using all default options:
zip-shotgun archive.zip

Part of the script output:
12/Dec/2018 Wed 23:13:13 +0100 | INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip
12/Dec/2018 Wed 23:13:13 +0100 | WARNING | Shell name was not provided. Generated random shell name: BCsQOkiN23ur7OUj
12/Dec/2018 Wed 23:13:13 +0100 | WARNING | Shell file was not provided. Using default wwwolf's webshell code
12/Dec/2018 Wed 23:13:13 +0100 | INFO | Using default file extension for wwwolf's webshell: php
12/Dec/2018 Wed 23:13:13 +0100 | INFO | --compress flag was NOT set. Archive will be uncompressed. Files will be only stored.
12/Dec/2018 Wed 23:13:13 +0100 | INFO | Writing file to the archive: BCsQOkiN23ur7OUj.php
12/Dec/2018 Wed 23:13:13 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: BCsQOkiN23ur7OUj.php
12/Dec/2018 Wed 23:13:13 +0100 | INFO | Writing file to the archive: ../BCsQOkiN23ur7OUj.php
12/Dec/2018 Wed 23:13:13 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../BCsQOkiN23ur7OUj.php
12/Dec/2018 Wed 23:13:13 +0100 | INFO | Writing file to the archive: ../../BCsQOkiN23ur7OUj.php
12/Dec/2018 Wed 23:13:13 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../../BCsQOkiN23ur7OUj.php
...
12/Dec/2018 Wed 23:13:13 +0100 | INFO | Finished. Try to access shell using BCsQOkiN23ur7OUj.php in the URL

Using default options and enabling compression for the archive file:
zip-shotgun --compress archive.zip

Part of the script output:
12/Dec/2018 Wed 23:16:13 +0100 | INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip
12/Dec/2018 Wed 23:16:13 +0100 | WARNING | Shell name was not provided. Generated random shell name: 6B6NtnZXbXSubDCh
12/Dec/2018 Wed 23:16:13 +0100 | WARNING | Shell file was not provided. Using default wwwolf's webshell code
12/Dec/2018 Wed 23:16:13 +0100 | INFO | Using default file extension for wwwolf's webshell: php
12/Dec/2018 Wed 23:16:13 +0100 | INFO | --compress flag was set. Archive will be compressed using DEFLATE algorithm with a level of 9
...
12/Dec/2018 Wed 23:16:13 +0100 | INFO | Finished. Try to access shell using 6B6NtnZXbXSubDCh.php in the URL

Using default options but changing the number of directories to go back in the archive to 3:
zip-shotgun --directories-count 3 archive.zip
zip-shotgun -c 3 archive.zip
The script will write 3 files in total to the archive.

Part of the script output:
12/Dec/2018 Wed 23:17:43 +0100 | INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip
12/Dec/2018 Wed 23:17:43 +0100 | WARNING | Shell name was not provided. Generated random shell name: 34Bv9YoignMHgk2F
12/Dec/2018 Wed 23:17:43 +0100 | WARNING | Shell file was not provided. Using default wwwolf's webshell code
12/Dec/2018 Wed 23:17:43 +0100 | INFO | Using default file extension for wwwolf's webshell: php
12/Dec/2018 Wed 23:17:43 +0100 | INFO | --compress flag was NOT set. Archive will be uncompressed. Files will be only stored.
12/Dec/2018 Wed 23:17:43 +0100 | INFO | Writing file to the archive: 34Bv9YoignMHgk2F.php
12/Dec/2018 Wed 23:17:43 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: 34Bv9YoignMHgk2F.php
12/Dec/2018 Wed 23:17:43 +0100 | INFO | Writing file to the archive: ../34Bv9YoignMHgk2F.php
12/Dec/2018 Wed 23:17:43 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../34Bv9YoignMHgk2F.php
12/Dec/2018 Wed 23:17:43 +0100 | INFO | Writing file to the archive: ../../34Bv9YoignMHgk2F.php
12/Dec/2018 Wed 23:17:43 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../../34Bv9YoignMHgk2F.php
12/Dec/2018 Wed 23:17:43 +0100 | INFO | Finished. Try to access shell using 34Bv9YoignMHgk2F.php in the URL

Using default options but providing the shell name inside the archive and enabling compression (the shell name cannot contain whitespace):
zip-shotgun --shell-name custom-name --compress archive.zip
zip-shotgun -n custom-name --compress archive.zip
The name for the shell files inside the archive will be set to the one provided by the user.

Part of the script output:
12/Dec/2018 Wed 23:19:12 +0100 | INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip
12/Dec/2018 Wed 23:19:12 +0100 | WARNING | Shell file was not provided. Using default wwwolf's webshell code
12/Dec/2018 Wed 23:19:12 +0100 | INFO | Using default file extension for wwwolf's webshell: php
12/Dec/2018 Wed 23:19:12 +0100 | INFO | --compress flag was set. Archive will be compressed using DEFLATE algorithm with a level of 9
12/Dec/2018 Wed 23:19:12 +0100 | INFO | Writing file to the archive: custom-name.php
12/Dec/2018 Wed 23:19:12 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: custom-name.php
12/Dec/2018 Wed 23:19:12 +0100 | INFO | Writing file to the archive: ../custom-name.php
12/Dec/2018 Wed 23:19:12 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../custom-name.php
12/Dec/2018 Wed 23:19:12 +0100 | INFO | Writing file to the archive: ../../custom-name.php
12/Dec/2018 Wed 23:19:12 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../../custom-name.php
12/Dec/2018 Wed 23:19:12 +0100 | INFO | Writing file to the archive: ../../../custom-name.php
...
12/Dec/2018 Wed 23:19:12 +0100 | INFO | Finished. Try to access shell using custom-name.php in the URL

Provide a custom shell file but use a random name inside the archive, setting the directories count to 3:
zip-shotgun --directories-count 3 --shell-file-path ./custom-shell.php archive.zip
zip-shotgun -c 3 -f ./custom-shell.php archive.zip
The shell code will be extracted from the user-provided file. Names inside the archive will be randomly generated.

Part of the script output:
12/Dec/2018 Wed 23:21:37 +0100 | INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip
12/Dec/2018 Wed 23:21:37 +0100 | WARNING | Shell name was not provided. Generated random shell name: gqXRAJu1LD8d8VKf
12/Dec/2018 Wed 23:21:37 +0100 | INFO | File containing shell code was provided: REDACTED\zip-shotgun\custom-shell.php. Content will be added to archive
12/Dec/2018 Wed 23:21:37 +0100 | INFO | Getting file extension from provided shell file for reuse: php
12/Dec/2018 Wed 23:21:37 +0100 | INFO | Opening provided file with shell code: REDACTED\zip-shotgun\custom-shell.php
12/Dec/2018 Wed 23:21:37 +0100 | INFO | --compress flag was NOT set. Archive will be uncompressed. Files will be only stored.
12/Dec/2018 Wed 23:21:37 +0100 | INFO | Writing file to the archive: gqXRAJu1LD8d8VKf.php
12/Dec/2018 Wed 23:21:37 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: gqXRAJu1LD8d8VKf.php
12/Dec/2018 Wed 23:21:37 +0100 | INFO | Writing file to the archive: ../gqXRAJu1LD8d8VKf.php
12/Dec/2018 Wed 23:21:37 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../gqXRAJu1LD8d8VKf.php
12/Dec/2018 Wed 23:21:37 +0100 | INFO | Writing file to the archive: ../../gqXRAJu1LD8d8VKf.php
12/Dec/2018 Wed 23:21:37 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../../gqXRAJu1LD8d8VKf.php
12/Dec/2018 Wed 23:21:37 +0100 | INFO | Finished. Try to access shell using gqXRAJu1LD8d8VKf.php in the URL

Provide a custom shell file and set the shell name to save inside the archive, with a directories count of 3 and compression:
zip-shotgun --directories-count 3 --shell-name custom-name --shell-file-path ./custom-shell.php --compress archive.zip
zip-shotgun -c 3 -n custom-name -f ./custom-shell.php --compress archive.zip
The shell code will be extracted from the user-provided file. Names inside the archive will be set to the user-provided name.

Part of the script output:
12/Dec/2018 Wed 23:25:19 +0100 | INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip
12/Dec/2018 Wed 23:25:19 +0100 | INFO | File containing shell code was provided: REDACTED\zip-shotgun\custom-shell.php. Content will be added to archive
12/Dec/2018 Wed 23:25:19 +0100 | INFO | Getting file extension from provided shell file for reuse: php
12/Dec/2018 Wed 23:25:19 +0100 | INFO | Opening provided file with shell code: REDACTED\zip-shotgun\custom-shell.php
12/Dec/2018 Wed 23:25:19 +0100 | INFO | --compress flag was set. Archive will be compressed using DEFLATE algorithm with a level of 9
12/Dec/2018 Wed 23:25:19 +0100 | INFO | Writing file to the archive: custom-name.php
12/Dec/2018 Wed 23:25:19 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: custom-name.php
12/Dec/2018 Wed 23:25:19 +0100 | INFO | Writing file to the archive: ../custom-name.php
12/Dec/2018 Wed 23:25:19 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../custom-name.php
12/Dec/2018 Wed 23:25:19 +0100 | INFO | Writing file to the archive: ../../custom-name.php
12/Dec/2018 Wed 23:25:19 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../../custom-name.php
12/Dec/2018 Wed 23:25:19 +0100 | INFO | Finished. Try to access shell using custom-name.php in the URL

Download Zip-Shotgun
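The "../" archive trick described above ("zip slip") is easy to sketch with Python's standard zipfile module. This is an illustrative minimal version, not zip-shotgun's actual source; the function and argument names are my own.

```python
import zipfile

def build_zip_slip_archive(out_path, shell_name, payload, depth=3):
    """Store `payload` several times, each entry prefixed with one more '../'.

    A vulnerable extractor that honours these entry names writes the file
    outside its working directory, one level further up per entry.
    """
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_STORED) as zf:
        for i in range(depth):
            info = zipfile.ZipInfo("../" * i + shell_name)
            # Mirror the tool's "chmod 777" log lines: permission bits live
            # in the high 16 bits of external_attr.
            info.external_attr = 0o777 << 16
            zf.writestr(info, payload)
```

Safe extractors reject or normalize such names; only naive ones that join the entry name directly onto the destination path are affected.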

Link: http://feedproxy.google.com/~r/PentestTools/~3/zgU6TcdSSH8/zip-shotgun-utility-script-to-test-zip.html

imaginaryC2 – Tool Which Aims To Help In The Behavioral (Network) Analysis Of Malware

Author: Felix Weyne (website) (Twitter)

Imaginary C2 is a Python tool which aims to help in the behavioral (network) analysis of malware. It hosts an HTTP server which captures HTTP requests towards selectively chosen domains/IPs. Additionally, the tool aims to make it easy to replay captured command-and-control responses/served payloads.

By using this tool, an analyst can feed the malware consistent network responses (e.g. C&C instructions for the malware to execute). Additionally, the analyst can capture and inspect HTTP requests towards a domain/IP which is offline at the time of the analysis.

Replay packet captures
Imaginary C2 provides two scripts to convert packet captures (PCAPs) or Fiddler session archives into request definitions which can be parsed by Imaginary C2. Via these scripts the user can extract HTTP request URLs and domains, as well as HTTP responses. This way, one can quickly replay HTTP responses for a given HTTP request.

Technical details
Requirements: Imaginary C2 requires Python 2.7 and Windows.
Modules: currently, Imaginary C2 contains three modules and two configuration files:

1. imaginary_c2.py – Hosts Python's simple HTTP server. Main module.
2. redirect_to_imaginary_c2.py – Alters Windows' hosts file and Windows' (IP) routing table.
3. unpack_fiddler_archive.py & unpack_pcap.py – Extract HTTP responses from packet captures. Add the corresponding HTTP request domains and URLs to the configuration files.
4. redirect_config.txt – Contains domains and IPs which need to be redirected to localhost (to the Python HTTP server).
5. requests_config.txt – Contains URL path definitions with the corresponding data sources.

Request definitions: each (HTTP) request defined in the request configuration consists of two parameters:

Parameter 1: HTTP request URL path (a.k.a. urlType)
- fixed: define the URL path as a literal string
- regex: define a regex pattern to be matched on the URL path

Parameter 2: HTTP response source (a.k.a. sourceType)
- data: Imaginary C2 will respond with the contents of a file on disk
- python: Imaginary C2 will run a Python script; the output of that script defines the HTTP response

Demo use case: simulating TrickBot servers
Imaginary C2 can be used to simulate the hosting of TrickBot components and configuration files. Additionally, it can also be used to simulate TrickBot's web injection servers.

How it works: upon execution, the TrickBot downloader connects to a set of hardcoded IPs to fetch a few configuration files. One of these configuration files contains the locations (IP addresses) of the TrickBot plugin servers. The TrickBot downloader downloads the plugins (modules) from these servers and decrypts them. The decrypted modules are then injected into a svchost.exe instance.

One of TrickBot's plugins is called injectdll, a plugin responsible for TrickBot's webinjects. The injectdll plugin regularly fetches an updated set of webinject configurations. For each targeted (banking) website in the configuration, the address of a webfake server is defined. When a victim browses to a (banking) website targeted by TrickBot, the browser secretly gets redirected to the webfake server. The webfake server hosts a replica of the targeted website, which is usually used in a social-engineering attack to defraud the victim.

Imaginary C2 in action: the video below shows the TrickBot downloader running inside svchost.exe and connecting to Imaginary C2 to download two modules. Each downloaded module gets injected into a newly spawned svchost.exe instance. The webinject module tries to steal the browser's saved passwords and exfiltrates them to the TrickBot server. Upon visiting a targeted banking website, TrickBot redirects the browser to the webfake server. In the demo, the webfake server hosts the message: "Default imaginary C2 server response" (full video).

Download imaginaryC2
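The replay idea, answering each malware request with a canned response keyed on the URL path, can be sketched with a tiny HTTP server. Note this is an illustrative Python 3 stand-in, not Imaginary C2's own code (which targets Python 2.7 and reads its responses from the configuration files described above); the paths and payloads are invented.

```python
import http.server
import threading

# Hypothetical request definitions: URL path -> canned response body.
RESPONSES = {"/gate.php": b"C2 instructions here"}
DEFAULT = b"Default imaginary C2 server response"

class ReplayHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the configured response for this path, or a default.
        body = RESPONSES.get(self.path, DEFAULT)
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet; the real tool logs captured requests

def start_server():
    """Start the replay server on an ephemeral localhost port."""
    srv = http.server.HTTPServer(("127.0.0.1", 0), ReplayHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

In the real tool, traffic reaches this server because the redirect module rewrites the hosts file and routing table so the malware's hardcoded domains resolve to localhost.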

Link: http://feedproxy.google.com/~r/PentestTools/~3/V0gucmHB1Ec/imaginaryc2-tool-which-aims-to-help-in.html

Aircrack-ng 1.5 – Complete Suite Of Tools To Assess WiFi Network Security

Aircrack-ng is a complete suite of tools to assess WiFi network security. It focuses on different areas of WiFi security:
- Monitoring: packet capture and export of data to text files for further processing by third-party tools.
- Attacking: replay attacks, deauthentication, fake access points and others via packet injection.
- Testing: checking WiFi cards and driver capabilities (capture and injection).
- Cracking: WEP and WPA PSK (WPA 1 and 2).

All tools are command line, which allows for heavy scripting; a lot of GUIs have taken advantage of this feature. It works primarily on Linux but also Windows, OS X, FreeBSD, OpenBSD, NetBSD, as well as Solaris and even eComStation 2.

Building

Requirements
- Autoconf
- Automake
- Libtool
- shtool
- OpenSSL development package or libgcrypt development package
- Airmon-ng (Linux) requires ethtool
- On Windows, Cygwin has to be used, and it also requires the w32api package
- On Windows, if using clang: libiconv and libiconv-devel
- Linux: LibNetlink 1 or 3; it can be disabled by passing --disable-libnl to configure
- pkg-config (pkgconf on FreeBSD)
- FreeBSD, OpenBSD, NetBSD, Solaris and OS X with MacPorts: gmake
- Linux/Cygwin: make and the Standard C++ Library development package (Debian: libstdc++-dev)

Optional stuff
- If you want SSID filtering with regular expressions in airodump-ng (--essid-regex), the PCRE development package is required.
- If you want to use airolib-ng and the '-r' option in aircrack-ng, the SQLite development package >= 3.3.17 is required (3.6.X or better is recommended).
- If you want to use Airpcap, the 'developer' directory from the CD/ISO/SDK is required.
- In order to build besside-ng, besside-ng-crawler, easside-ng, tkiptun-ng and wesside-ng, the libpcap development package is required (on Cygwin, use the Airpcap SDK instead; see above).
- For best performance on FreeBSD (50-70% more), install gcc5 (or better) via: pkg install gcc8
- rfkill
- For best performance on SMP machines, ensure the hwloc library and headers are installed. It is strongly recommended on high core count systems, where it may give a serious speed boost.
- CMocka for unit testing

Installing required and optional dependencies
Below are instructions for installing the basic requirements to build aircrack-ng for a number of operating systems.
Note: CMocka should not be a dependency when packaging Aircrack-ng.

Linux
Debian/Ubuntu:
sudo apt-get install build-essential autoconf automake libtool pkg-config libnl-3-dev libnl-genl-3-dev libssl-dev ethtool shtool rfkill zlib1g-dev libpcap-dev libsqlite3-dev libpcre3-dev libhwloc-dev libcmocka-dev
Fedora/CentOS/RHEL:
sudo yum install libtool pkgconfig sqlite-devel autoconf automake openssl-devel libpcap-devel pcre-devel rfkill libnl3-devel gcc gcc-c++ ethtool hwloc-devel libcmocka-devel

BSD
FreeBSD:
pkg install pkgconf shtool libtool gcc8 automake autoconf pcre sqlite3 openssl gmake hwloc cmocka
DragonflyBSD:
pkg install pkgconf shtool libtool gcc7 automake autoconf pcre sqlite3 libgcrypt gmake cmocka
OpenBSD:
pkg_add pkgconf shtool libtool gcc automake autoconf pcre sqlite3 openssl gmake cmocka

OS X
XCode, the Xcode command line tools and Homebrew are required.
brew install autoconf automake libtool openssl shtool pkg-config hwloc pcre sqlite3 libpcap cmocka

Windows
Cygwin
Cygwin requires the full path to the setup.exe utility in order to automate the installation of the necessary packages. In addition, it requires the location of your installation, a path to the cached packages download location, and a mirror URL. An example of automatically installing all the dependencies:
c:\cygwin\setup-x86.exe -qnNdO -R C:/cygwin -s http://cygwin.mirror.constant.com -l C:/cygwin/var/cache/setup -P autoconf -P automake -P bison -P gcc-core -P gcc-g++ -P mingw-runtime -P mingw-binutils -P mingw-gcc-core -P mingw-gcc-g++ -P mingw-pthreads -P mingw-w32api -P libtool -P make -P python -P gettext-devel -P gettext -P intltool -P libiconv -P pkg-config -P git -P wget -P curl -P libpcre-devel -P openssl-devel -P libsqlite3-devel
MSYS2
pacman -Sy autoconf automake-wrapper libtool msys2-w32api-headers msys2-w32api-runtime gcc pkg-config git python openssl-devel openssl libopenssl msys2-runtime-devel gcc binutils make pcre-devel libsqlite-devel

Compiling
To build aircrack-ng, the Autotools build system is utilized. Autotools replaces the older method of compilation.
NOTE: If utilizing a developer version, e.g. one checked out from source control, you will need to run a pre-configure script first. The script to use is one of the following: autoreconf -i or env NOCONFIGURE=1 ./autogen.sh

First, ./configure the project for building with the appropriate options specified for your environment:
./configure
TIP: If the above fails, see the note above about developer source control versions.
Next, compile the project (respecting whether make or gmake is needed):
- Compilation: make
- Compilation on *BSD or Solaris: gmake
Finally, the additional targets listed below may be of use in your environment:
- Execute all unit tests: make check
- Installing: make install
- Uninstall: make uninstall

./configure flags
When configuring, the following flags can be used and combined to adjust the suite to your choosing:
- --with-airpcap=DIR: needed for supporting Airpcap devices on Windows (Cygwin or MSYS2 only). Replace DIR with the absolute path to the root of the extracted source code from the Airpcap CD or the downloaded SDK available online. Required on Windows to build besside-ng, besside-ng-crawler, easside-ng, tkiptun-ng and wesside-ng when building experimental tools. The developer pack (compatible with versions 4.1.1 and 4.1.3) can be downloaded at https://support.riverbed.com/content/support/software/steelcentral-npm/airpcap.html
- --with-experimental: needed to compile tkiptun-ng, easside-ng, buddy-ng, buddy-ng-crawler, airventriloquist and wesside-ng. The libpcap development package is also required to compile most of the tools; if not present, not all experimental tools will be built. On Cygwin, libpcap is not present and the Airpcap SDK replaces it; see the --with-airpcap option above.
- --with-ext-scripts: needed to build airoscript-ng, versuck-ng, airgraph-ng and airdrop-ng. Note: each script has its own dependencies.
- --with-gcrypt: use the libgcrypt crypto library instead of the default OpenSSL, and also use the internal fast SHA-1 implementation (borrowed from Git). Dependency (Debian): libgcrypt20-dev
- --with-duma: compile with DUMA support. DUMA is a library to detect buffer overruns and under-runs. Dependency (Debian): duma
- --disable-libnl: set up the project to be compiled without libnl (1 or 3). Linux only.
- --without-opt: do not enable the stack protector (on GCC 4.9 and above).
- --enable-shared: make OSdep a shared library.
- --disable-shared: when combined with --enable-static, it will statically compile Aircrack-ng.
- --with-avx512: on x86, add support for AVX512 instructions in aircrack-ng. Only use it when the current CPU supports AVX512.
- --with-static-simd=: compile a single optimization into the aircrack-ng binary. Useful when compiling statically and/or for space-constrained devices. Valid SIMD options: x86-sse2, x86-avx, x86-avx2, x86-avx512, ppc-altivec, ppc-power8, arm-neon, arm-asimd. Must be used with --enable-static --disable-shared. When using those two options, the default is to compile the generic optimization into the binary; --with-static-simd merely allows choosing another one.

Examples:
Configure and compile:
./configure --with-experimental
make
Compile with gcrypt:
./configure --with-gcrypt
make
Installing: make install
Installing (strip binaries): make install-strip
Installing, with external scripts:
./configure --with-experimental --with-ext-scripts
make
make install
Testing (with sqlite, experimental and pcre):
./configure --with-experimental
make
make check
Compiling on OS X with MacPorts (and all options):
./configure --with-experimental
gmake
Compiling on OS X 10.10 with XCode 7.1 and Homebrew:
env CC=gcc-4.9 CXX=g++-4.9 ./configure
make
make check
NOTE: Older XCode ships with a version of LLVM that does not support CPU feature detection, which causes ./configure to fail. To work around this older LLVM, a different compiler suite must be used, such as GCC or a newer LLVM from Homebrew.
If you wish to use OpenSSL from Homebrew, you may need to specify the location of its installation. To figure out where OpenSSL lives, run:
brew --prefix openssl
Use the output above as the DIR for --with-openssl=DIR in the ./configure line:
env CC=gcc-4.9 CXX=g++-4.9 ./configure --with-openssl=DIR
make
make check
Compiling on FreeBSD with gcc8:
env CC=gcc8 CXX=g++8 MAKE=gmake ./configure
gmake
Compiling on Cygwin with Airpcap (assuming the Airpcap devpack is unpacked in the Aircrack-ng directory):
cp -vfp Airpcap_Devpack/bin/x86/airpcap.dll src
cp -vfp Airpcap_Devpack/bin/x86/airpcap.dll src/aircrack-osdep
cp -vfp Airpcap_Devpack/bin/x86/airpcap.dll src/aircrack-crypto
cp -vfp Airpcap_Devpack/bin/x86/airpcap.dll src/aircrack-util
dlltool -D Airpcap_Devpack/bin/x86/airpcap.dll -d build/airpcap.dll.def -l Airpcap_Devpack/bin/x86/libairpcap.dll.a
autoreconf -i
./configure --with-experimental --with-airpcap=$(pwd)
make
Compiling on DragonflyBSD with gcrypt using GCC 7:
autoreconf -i
env CC=gcc7 CXX=g++7 MAKE=gmake ./configure --with-experimental --with-gcrypt
gmake
Compiling on OpenBSD (with autoconf 2.69 and automake 1.16):
export AUTOCONF_VERSION=2.69
export AUTOMAKE_VERSION=1.16
autoreconf -i
env MAKE=gmake ./configure
gmake

Packaging
Automatic detection of CPU optimization is done at run time. This behavior is desirable when packaging Aircrack-ng (for a Linux or other distribution).
Also, in some cases it may be desired to provide your own flags completely and not have the suite auto-detect a number of optimizations. To do this, add the additional flag --without-opt to the ./configure line:
./configure --without-opt

Using precompiled binaries
Linux/BSD: use your package manager to download aircrack-ng; in most cases it carries an old version.
Windows: install the appropriate "monitor" driver for your card (standard drivers don't work for capturing data). The aircrack-ng suite consists of command line tools, so you have to open a command line (Start menu -> Run... -> cmd.exe) and then use them. Run the executables without any parameters to get help.

Documentation
Documentation, tutorials and more can be found at https://aircrack-ng.org. See also the manpages and the forum. For further information, check the README file.

Download Aircrack-Ng

Link: http://www.kitploit.com/2018/12/aircrack-ng-15-complete-suite-of-tools.html

Faraday v3.4 – Collaborative Penetration Test and Vulnerability Management Platform

Here are the main new features and improvements in Faraday v3.4:

Services can now be tagged. With this new feature, you can easily identify important services, geolocate them and more.

New search operators OR/NOT
In a previous release we added the AND operator; with 3.4 you can also use the OR and NOT operators in the Status Report search box. This will allow you to find vulnerabilities easily with filters like this one:
(severity:critical or severity:high) or name:"MS18-172"

Performance improvements for big workspaces
We have been working on optimizing our REST API endpoints to support millions of vulnerabilities in each workspace.

Here is the full changelog for version 3.4:
- In GTK, check that active_workspace is not null
- Add fbruteforce services fplugin
- Attachments can be added to a vulnerability through the API
- Catch gaierror on the Lynis plugin
- Add OR and NOT with parenthesis support in the status report search
- The Info API is now public
- The Web UI now detects the AppScan plugin
- Improve performance in the workspace using a custom query
- Workspaces can be set as active/disabled on the welcome page
- Change to the Nmap plugin: the response field in VulnWeb now goes to the Data field
- Update code to support the latest SQLAlchemy version
- Fix a create_vuln fplugin bug that incorrectly reported duplicated vulns
- The client can set a custom logo in Faraday
- Centered checkboxes on the user list page
- Clients and pentesters can't activate/deactivate workspaces
- In GTK, dialogs now check that user_info is not False
- Add tags to the Service object (frontend and backend API)
- The user limit only counts active users
- Improve the error message shown when the license is not valid

Download Faraday v3.4

Link: http://www.kitploit.com/2018/12/faraday-v34-collaborative-penetration.html

Celerystalk – An Asynchronous Enumeration and Vulnerability Scanner

celerystalk helps you automate your network scanning/enumeration process with asynchronous jobs (aka tasks) while retaining full control of which tools you want to run.

Configurable – Some common tools are in the default config, but you can add any tool you want
Service Aware – Uses nmap/nessus service names rather than port numbers to decide which tools to run
Scalable – Designed for scanning multiple hosts, but works well for scanning one host at a time
VirtualHosts – Supports subdomain recon and virtualhost scanning
Job Control – Supports canceling, pausing, and resuming of tasks, inspired by Burp scanner
Screenshots – Automatically takes screenshots of every URL identified via brute force (gobuster) and spidering (Photon)

Install/Setup
Supported operating systems: Kali
Supported Python version: 2.x
You must install and run celerystalk as root:
# git clone https://github.com/sethsec/celerystalk.git
# cd celerystalk/setup
# ./install.sh
# cd ..
# ./celerystalk -h

Using celerystalk – The basics

[CTF/HackTheBox mode] – How to scan a host by IP:
# nmap 10.10.10.10 -Pn -p- -sV -oX tenten.xml        # Run nmap
# ./celerystalk workspace create -o /htb             # Create default workspace and set output dir
# ./celerystalk import -f tenten.xml                 # Import scan
# ./celerystalk db services                          # If you want to see what services were loaded
# ./celerystalk scan                                 # Run all enabled commands
# ./celerystalk query watch (then Ctrl+c)            # Watch scans move from pending > running > complete
# ./celerystalk report                               # Generate report
# firefox /htb/celerystalkReports/Workspace-Report[Default].html &   # View report

[Vulnerability Assessment Mode] – How to scan a list of in-scope hosts/networks and any subdomains that resolve to any of the in-scope IPs:
# nmap -iL client-inscope-list.txt -Pn -p- -sV -oX client.xml   # Run nmap
# ./celerystalk workspace create -o /assessments/client         # Create default workspace and set output dir
# ./celerystalk import -f client.xml -S scope.txt               # Import scan and scope files
# ./celerystalk subdomains -d client.com,client.net             # Find subdomains and determine if in scope
# ./celerystalk scan                                            # Run all enabled commands
# ./celerystalk query watch (then Ctrl+c)                       # Wait for scans to finish
# ./celerystalk report                                          # Generate report
# firefox /celerystalkReports/Workspace-Report[Default].html &  # View report

[URL Mode] – How to scan a URL (use this mode to scan sub-directories found during the first wave of scans):
# ./celerystalk workspace create -o /assessments/client         # Create default workspace and set output dir
# ./celerystalk scan -u http://10.10.10.10/secret_folder/       # Run all enabled commands
# ./celerystalk query watch (then Ctrl+c)                       # Wait for scans to finish
# ./celerystalk report                                          # Generate report
# firefox <path>/celerystalkReports/Workspace-Report[Default].html &   # View report

Using celerystalk – Some more detail

Configure which tools you'd like celerystalk to execute: The install script drops a config.ini file in the celerystalk folder. The config.ini file is broken up into three sections:

Service Mapping – The first section normalizes Nmap & Nessus service names for celerystalk (this idea was created by @codingo_ in Reconnoitre AFAIK):
[nmap-service-names]
http = http,http-alt,http-proxy,www,http?
https = ssl/http,https,ssl/http-alt,ssl/http?
ftp = ftp,ftp?
mysql = mysql
dns = dns,domain,domain

Domain Recon Tools – The second section defines the tools you'd like to use for subdomain discovery (an optional feature):
[domain-recon]
amass : /opt/amass/amass -d [DOMAIN]
sublist3r : python /opt/Sublist3r/sublist3r.py -d [DOMAIN]

Service Configuration – The rest of the config.ini sections define which commands you want celerystalk to run for each identified service (i.e., http, https, ssh).
Disable any command by commenting it out with a ; or a #. Add your own commands using the [TARGET], [PORT], and [OUTPUT] placeholders. Here is an example:
[http]
whatweb : whatweb http://[TARGET]:[PORT] -a3 --colour=never > [OUTPUT].txt
cewl : cewl http://[TARGET]:[PORT]/ -m 6 -w [OUTPUT].txt
curl_robots : curl http://[TARGET]:[PORT]/robots.txt --user-agent 'Googlebot/2.1 (+http://www.google.com/bot.html)' --connect-timeout 30 --max-time 180 > [OUTPUT].txt
nmap_http_vuln : nmap -sC -sV -Pn -v -p [PORT] --script=http-vuln* [TARGET] -d -oN [OUTPUT].txt -oX [OUTPUT].xml --host-timeout 120m --script-timeout 20m
nikto : nikto -h http://[TARGET] -p [PORT] &> [OUTPUT].txt
gobuster-common : gobuster -u http://[TARGET]:[PORT]/ -k -w /usr/share/seclists/Discovery/Web-Content/common.txt -s '200,204,301,302,307,403,500' -e -n -q > [OUTPUT].txt
photon : python /opt/Photon/photon.py -u http://[TARGET]:[PORT] -o [OUTPUT]
gobuster_2.3-medium : gobuster -u http://[TARGET]:[PORT]/ -k -w /usr/share/wordlists/dirbuster/directory-list-lowercase-2.3-medium.txt -s '200,204,301,307,403,500' -e -n -q > [OUTPUT].txt

Run Nmap or Nessus:
Nmap: Run nmap against your target(s). Required: enable version detection (-sV) and output to XML (-oX filename.xml). All other nmap options are up to you.
Here are some examples:
nmap target(s) -Pn -p- -sV -oX filename.xml
nmap -iL target_list.txt -Pn -sV -oX filename.xml
Nessus: Run Nessus against your target(s) and export the results as a .nessus file.

Create workspace:
(no options) – Prints the current workspace
create – Creates a new workspace
-w – Define the new workspace name
-o – Define the output directory assigned to the workspace
Create default workspace:    ./celerystalk workspace create -o /assessments/client
Create named workspace:      ./celerystalk workspace create -o /assessments/client -w client
Switch to another workspace: ./celerystalk workspace client

Import Data: Import data into celerystalk.
-f scan.xml – Nmap/Nessus XML. Adds all IP addresses from this file to the hosts table and marks them all in scope to be scanned. Adds all ports and service types to the services table.
-S scope.txt – Scope file.
-D subdomains.txt – (Sub)domains file. celerystalk determines whether each subdomain is in scope by resolving the IP and looking for the IP in the DB. If there is a match, the domain is marked as in scope and will be scanned.
Import Nmap XML file:      ./celerystalk import -f /assessments/nmap.xml
Import Nessus file:        ./celerystalk import -f /assessments/scan.nessus
Import list of domains:    ./celerystalk import -D <file>
Import list of IPs/ranges: ./celerystalk import -S <file>
Import multiple files:     ./celerystalk import -f nmap.xml -S scope.txt -D domains.txt

Find Subdomains (Optional): celerystalk will perform subdomain recon using the tools specified in config.ini.
-d domain1,domain2,etc – Run Amass, Sublist3r, etc. and store the domains in the DB. After running your subdomain recon tools, celerystalk determines whether each subdomain is in scope by resolving the IP and looking for the IP in the DB. If there is a match, the domain is marked as in scope and will be scanned.
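The in-scope check celerystalk applies to each discovered subdomain (resolve the name, then look its IP up in the workspace DB) can be sketched as below. The function name and the plain set standing in for the DB are illustrative, not celerystalk's actual code:

```python
import socket

def subdomain_in_scope(subdomain, scoped_ips):
    """Resolve a subdomain and report whether its IPv4 address is in scope.
    scoped_ips stands in for the IPs imported into the workspace DB."""
    try:
        ip = socket.gethostbyname(subdomain)
    except socket.gaierror:
        return False  # unresolvable hosts are treated as out of scope here
    return ip in scoped_ips

# A subdomain would be scanned only if it resolves to an imported IP.
in_scope = subdomain_in_scope("localhost", {"127.0.0.1"})
```

Only subdomains that pass this check are marked in scope and queued for scanning.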
Find subdomains: ./celerystalk subdomains -d domain1.com,domain2.com

Launch Scan: I recommend using the import command first and running scan with no options; however, you do have the option to do it all at once (import and scan) by using the flags below. celerystalk will submit tasks to celery, which asynchronously executes them and logs output to your output directory.
(no options) – Scan all in scope hosts. Reads the DB and scans every in scope IP and subdomain. Launches all enabled tools for IPs, but only http/https specific tools against virtualhosts.
-t ip,vhost,cidr – Scan specific target(s) from the DB or scan file. Scan a subset of the in scope IPs and/or subdomains.
-s – Simulation. Sends all of the tasks to celery, but all commands are executed with a # before them, rendering them inert.
Use these only if you want to skip the import phase and import/scan all at once:
-f scan.xml – Import and process Nmap/Nessus XML before the scan. Adds all IP addresses from this file to the hosts table and marks them all in scope to be scanned. Adds all ports and service types to the services table.
-S scope.txt – Import and process a scope file before the scan.
-D subdomains.txt – Import and process a (sub)domains file before the scan. celerystalk determines whether each subdomain is in scope by resolving the IP and looking for the IP in the DB. If there is a match, the domain is marked as in scope and will be scanned.
-d domain1,domain2,etc – Find subdomains and scan in scope hosts. After running your subdomain recon tools, celerystalk determines whether each subdomain is in scope by resolving the IP and looking for the IP in the DB. If there is a match, the domain is marked as in scope and will be scanned.
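The -t flag accepts mixed target formats: single IPs, CIDR blocks, and hostnames. A rough sketch of how such a list could be expanded using Python's ipaddress module; this is an illustration, not celerystalk's actual parser, and it leaves out the 10.0.0.100-200 dash-range form:

```python
import ipaddress

def expand_targets(spec):
    """Expand a comma-separated -t style list of IPs, CIDR blocks, and
    hostnames into individual targets. Dash ranges are not handled here."""
    targets = []
    for item in spec.split(","):
        try:
            net = ipaddress.ip_network(item, strict=False)
        except ValueError:
            targets.append(item)  # not an IP/CIDR: hostname or vhost
            continue
        if net.num_addresses == 1:
            targets.append(str(net.network_address))  # single IP
        else:
            targets.extend(str(host) for host in net.hosts())  # CIDR block
    return targets
```

For example, "10.0.0.0/30" expands to its two usable host addresses, while a hostname passes through untouched.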
Scan imported hosts/subdomains:
Scan all in scope hosts:  ./celerystalk scan
Scan subset of DB hosts:  ./celerystalk scan -t 10.0.0.1,10.0.0.3
                          ./celerystalk scan -t 10.0.0.100-200
                          ./celerystalk scan -t 10.0.0.0/24
                          ./celerystalk scan -t sub.domain.com
Simulation mode:          ./celerystalk scan -s

Import and Scan:
Start from Nmap XML file: ./celerystalk scan -f /pentest/nmap.xml -o /pentest
Start from Nessus file:   ./celerystalk scan -f /pentest/scan.nessus -o /pentest
Scan all in scope vhosts: ./celerystalk scan -f <file> -o /pentest -d domain1.com,domain2.com
Scan subset hosts in XML: ./celerystalk scan -f <file> -o /pentest -t 10.0.0.1,10.0.0.3
                          ./celerystalk scan -f <file> -o /pentest -t 10.0.0.100-200
                          ./celerystalk scan -f <file> -o /pentest -t 10.0.0.0/24
Simulation mode:          ./celerystalk scan -f <file> -o /pentest -s

Rescan: Use this command to rescan an already scanned host.
(no option) – For each in scope host in the DB, celerystalk will ask if you want to rescan it.
-t ip,vhost,cidr – Scan a subset of the in scope IPs and/or subdomains.
Rescan all hosts:  ./celerystalk rescan
Rescan some hosts: ./celerystalk rescan -t 1.2.3.4,sub.domain.com
Simulation mode:   ./celerystalk rescan -s

Query Status: Asynchronously check the status of the task queue as frequently as you like. The watch mode actually executes the Linux watch command so you don't fill up your entire terminal buffer.
(no options) – Shows all tasks in the default workspace
watch – Sends the command to the Unix watch command, which gives you an updated status every 2 seconds
brief – Limit of 5 results per status (pending/running/completed/cancelled/paused)
summary – Shows only a banner with numbers, not the tasks themselves
Query tasks: ./celerystalk query
             ./celerystalk query watch
             ./celerystalk query brief
             ./celerystalk query summary
             ./celerystalk query summary watch

Cancel/Pause/Resume Tasks: Cancel/pause/resume any task(s) that are currently running or in the queue.
cancel – Canceling a running task will send a kill -TERM. Canceling a queued task will make celery ignore it (uses celery's revoke). Canceling all tasks will kill running tasks and revoke all queued tasks.
pause – Pausing a single task uses kill -STOP to suspend the process. Pausing all tasks attempts to kill -STOP all running tasks, but it is a little wonky and you might need to run it a few times. It is possible a job completed before it was able to be paused, which means you will have a worker that is still accepting new jobs.
resume – Resuming tasks sends a kill -CONT, which allows the process to start up again where it left off.
Cancel/Pause/Resume tasks: ./celerystalk <verb> 5,6,10-20   # Cancel/pause/resume tasks 5, 6, and 10-20 from the current workspace
                           ./celerystalk <verb> all        # Cancel/pause/resume all tasks from the current workspace

Run Report: Run a report, which combines all of the tool output into an HTML file and a txt file. Run this as often as you like; each time you run the report it overwrites the previous report.
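Under the hood, cancel/pause/resume are plain POSIX job control, as the signal names above suggest. A self-contained sketch of the mechanism on a child process; the helper names are made up for illustration, and this is not celerystalk's actual code:

```python
import os
import signal
import subprocess

def cancel_task(pid):
    os.kill(pid, signal.SIGTERM)   # cancel: kill -TERM

def pause_task(pid):
    os.kill(pid, signal.SIGSTOP)   # pause: kill -STOP suspends the process

def resume_task(pid):
    os.kill(pid, signal.SIGCONT)   # resume: kill -CONT continues where it left off

# Demonstrate on a long-running child process.
proc = subprocess.Popen(["sleep", "60"])
pause_task(proc.pid)    # suspended: no CPU used, state preserved
resume_task(proc.pid)   # running again from where it stopped
cancel_task(proc.pid)   # terminated, like canceling a running task
proc.wait()
```

Because SIGSTOP cannot be caught or ignored, a paused tool keeps its state and resumes exactly where it left off, which is why resume works even for tools with no built-in pause feature.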
Create report: ./celerystalk report   # Create a report for all scanned hosts in the current workspace

Access the DB: List the workspaces, hosts, services, or paths stored in the celerystalk database.
workspaces – Show all known workspaces and the output directory associated with each workspace
services – Show all known open ports and service types by IP
hosts – Show all hosts (IP addresses and subdomains/vhosts), whether they are in scope, and whether they have been submitted for scanning
paths – Show all paths that have been identified by vhost
-w workspace – Specify a non-default workspace
Show workspaces: ./celerystalk db workspaces
Show services:   ./celerystalk db services
Show hosts:      ./celerystalk db hosts
Show paths:      ./celerystalk db paths

Export DB: Export each table of the DB to a CSV file.
(no options) – Export the services, hosts, and paths tables from the default database
-w workspace – Specify a non-default workspace
Export current DB: ./celerystalk db export
Export another DB: ./celerystalk db export -w test

Usage:
  celerystalk workspace create -o <output_dir> [-w workspace_name]
  celerystalk workspace [<workspace_name>]
  celerystalk import [-f <nmap_file>] [-S scope_file] [-D subdomains_file] [-u <url>]
  celerystalk subdomains -d <domains> [-s]
  celerystalk scan [-f <nmap_file>] [-t <targets>] [-d <domains>] [-S scope_file] [-D subdomains_file] [-s]
  celerystalk scan -u <url> [-s]
  celerystalk rescan [-t <targets>] [-s]
  celerystalk query ([full] | [summary] | [brief]) [watch]
  celerystalk query [watch] ([full] | [summary] | [brief])
  celerystalk report
  celerystalk cancel ([all]|[<task_ids>])
  celerystalk pause ([all]|[<task_ids>])
  celerystalk resume ([all]|[<task_ids>])
  celerystalk db ([workspaces] | [services] | [hosts] | [vhosts] | [paths])
  celerystalk db export
  celerystalk shutdown
  celerystalk interactive
  celerystalk (help | -h | --help)

Options:
  -h --help             Show this screen
  -v --version          Show version
  -f <nmap_file>        Nmap XML import file
  -o <output_dir>       Output directory
  -S <scope_file>       Scope import file
  -D <subdomains_file>  Subdomains import file
  -t <targets>          Target(s): IP, IP range, CIDR
  -u <url>              URL to parse and scan with all configured tools
  -w <workspace>        Workspace
  -d --domains          Domains to scan for vhosts
  -s --simulation       Simulation mode: submit tasks but comment out all commands

Examples:
  Workspace
    Create default workspace:    celerystalk workspace create -o /assessments/client
    Create named workspace:      celerystalk workspace create -o /assessments/client -w client
    Switch to another workspace: celerystalk workspace client2
  Import
    Import Nmap XML file:        celerystalk import -f /assessments/nmap.xml
    Import Nessus file:          celerystalk import -f /assessments/scan.nessus
    Import list of domains:      celerystalk import -D <file>
    Import list of IPs/ranges:   celerystalk import -S <file>
    Import multiple files:       celerystalk import -f nmap.xml -S scope.txt -D domains.txt
  Subdomain Recon
    Find subdomains:             celerystalk subdomains -d domain1.com,domain2.com
  Scan
    Scan all in scope hosts:     celerystalk scan
    Scan subset of DB hosts:     celerystalk scan -t 10.0.0.1,10.0.0.3
                                 celerystalk scan -t 10.0.0.100-200
                                 celerystalk scan -t 10.0.0.0/24
                                 celerystalk scan -t sub.domain.com
    Simulation mode:             celerystalk scan -s
  Import and Scan
    Start from Nmap XML file:    celerystalk scan -f /pentest/nmap.xml
    Start from Nessus file:      celerystalk scan -f /pentest/scan.nessus
    Scan subset hosts in XML:    celerystalk scan -f <file> -t 10.0.0.1,10.0.0.3
                                 celerystalk scan -f <file> -t 10.0.0.100-200
                                 celerystalk scan -f <file> -t 10.0.0.0/24
                                 celerystalk scan -f <file> -t sub.domain.com
    Simulation mode:             celerystalk scan -f <file> -s
  Rescan
    Rescan all hosts:            celerystalk rescan
    Rescan some hosts:           celerystalk rescan -t 1.2.3.4,sub.domain.com
    Simulation mode:             celerystalk rescan -s
  Query Mode
    All tasks:                   celerystalk query
    Update status every 2s:      celerystalk query watch
    Show only 5 tasks per mode:  celerystalk query brief
    Show stats only:             celerystalk query summary
    Show stats every 2s:         celerystalk query summary watch
  Job Control (cancel/pause/resume)
    Specific tasks:              celerystalk cancel 5,6,10-20
                                 celerystalk pause 5,6,10-20
                                 celerystalk resume 5,6,10-20
    All tasks, current workspace: celerystalk cancel all
                                 celerystalk pause all
                                 celerystalk resume all
  Access the DB
    Show workspaces:             celerystalk db workspaces
    Show services:               celerystalk db services
    Show hosts:                  celerystalk db hosts
    Show vhosts only:            celerystalk db vhosts
    Show paths:                  celerystalk db paths
  Export DB
    Export current DB:           celerystalk db export

Credit
This project was inspired by many great tools:
https://github.com/codingo/Reconnoitre by @codingo_
https://github.com/frizb/Vanquish by @frizb
https://github.com/leebaird/discover by @discoverscripts
https://github.com/1N3/Sn1per
https://github.com/SrFlipFlop/Network-Security-Analysis by @SrFlipFlop
Thanks to @offensivesecurity and @hackthebox_eu for their lab networks.
Also, thanks to:
@decidedlygray for pointing me towards celery, helping me solve python problems that were over my head, and for the extensive beta testing
@kerpanic for inspiring me to dust off an old project and turn it into celerystalk
My TUV OpenSky team and my IthacaSec hackers for testing this out and submitting bugs and features

Download Celerystalk

Link: http://feedproxy.google.com/~r/PentestTools/~3/9zxM11uFyz8/celerystalk-asynchronous-enumeration.html

DevAudit – Open-source, Cross-Platform, Multi-Purpose Security Auditing Tool

DevAudit is an open-source, cross-platform, multi-purpose security auditing tool targeted at developers and teams adopting DevOps and DevSecOps that detects security vulnerabilities at multiple levels of the solution stack. DevAudit provides a wide array of auditing capabilities that automate security practices and the implementation of security auditing in the software development life-cycle. DevAudit can scan your operating system and application package dependencies, application and application server configurations, and application code for potential vulnerabilities based on data aggregated by providers like OSS Index and Vulners from a wide array of sources and data feeds such as the National Vulnerability Database (NVD) CVE data feed, the Debian Security Advisories data feed, Drupal Security Advisories, and many others.

DevAudit helps developers address at least 4 of the OWASP Top 10 risks to web application development:
A9 Using Components with Known Vulnerabilities
A5 Security Misconfiguration
A6 Sensitive Data Exposure
A2 Broken Authentication and Session Management
as well as risks classified by MITRE in the CWE dictionary, such as CWE-2 Environment and CWE-200 Information Disclosure. As development progresses and its capabilities mature, DevAudit will be able to address the other risks on the OWASP Top 10 and CWE lists, like Injection and XSS. With the focus on web, cloud, and distributed multi-user applications, software development today is increasingly a complex affair, with security issues and potential vulnerabilities arising at all levels of the stack developers rely on to deliver applications. The goal of DevAudit is to provide a platform for automating the implementation of development security reviews and best practices at all levels of the solution stack, from library package dependencies to application and server configuration to source code.

Features
Cross-platform with a Docker image also available.
DevAudit runs on Windows and Linux, with *BSD, Mac, and ARM Linux support planned. Only an up-to-date version of .NET or Mono is required to run DevAudit. A DevAudit Docker image can also be pulled from Docker Hub and run without the need to install Mono.

CLI interface. DevAudit has a CLI interface with an option for non-interactive output and can be easily integrated into CI build pipelines or as post-build command-line tasks in developer IDEs. Work on integration of the core audit library into IDE GUIs has already begun with the Audit.Net Visual Studio extension.

Continuously updated vulnerabilities data. DevAudit uses backend data providers like OSS Index and Vulners, which provide continuously updated vulnerabilities data compiled from a wide range of security data feeds and sources such as the NVD CVE feeds, Drupal Security Advisories, and so on. Support for additional vulnerability and package data providers like vFeed and Libraries.io will be added.

Audit operating system and development package dependencies. DevAudit audits Windows applications and packages installed via Windows MSI, Chocolatey, and OneGet, as well as Debian, Ubuntu, and CentOS Linux packages installed via Dpkg, RPM, and YUM, for vulnerabilities reported for specific versions of the applications and packages. For development package dependencies and libraries, DevAudit audits NuGet v2 dependencies for .NET, Yarn/NPM and Bower dependencies for nodejs, and Composer package dependencies for PHP. Support for other package managers for different languages is added regularly.

Audit application server configurations. DevAudit audits the server version and the server configuration for the OpenSSH sshd, Apache httpd, MySQL/MariaDB, PostgreSQL, and Nginx servers, with many more coming. Configuration auditing is based on the Alpheus library and is done using full syntactic analysis of the server configuration files.
Server configuration rules are stored in YAML text files and can be customized to the needs of developers. Support for many more servers and applications, and for types of analysis like database auditing, is added regularly.

Audit application configurations. DevAudit audits Microsoft ASP.NET applications and detects vulnerabilities present in the application configuration. Application configuration rules are stored in YAML text files and can be customized to the needs of developers. Application configuration auditing for applications like Drupal, WordPress, and DNN CMS is coming.

Audit application code by static analysis. DevAudit currently supports static analysis of .NET CIL bytecode. Analyzers reside in external script files and can be fully customized based on the needs of the developer. Support for C# source code analysis via Roslyn, PHP7 source code, and many more languages and external static code analysis tools is coming.

Remote agentless auditing. DevAudit can connect to remote hosts via SSH, with identical auditing features available in remote environments as in local environments. Only a valid SSH login is required to audit remote hosts, and DevAudit running on Windows can connect to and audit Linux hosts over SSH. On Windows, DevAudit can also remotely connect to and audit other Windows machines using WinRM.

Agentless Docker container auditing. DevAudit can audit running Docker containers from the Docker host, with identical features available in container environments as in local environments.

GitHub repository auditing. DevAudit can connect directly to a project repository hosted on GitHub and perform package source and application configuration auditing.

PowerShell support. DevAudit can also be run inside the PowerShell system administration environment as cmdlets. Work on PowerShell support is paused at present but will resume in the near future with support for cross-platform PowerShell on both Windows and Linux.
Requirements
DevAudit is a .NET 4.6 application. To install locally on your machine you will need either the Microsoft .NET Framework 4.6 runtime on Windows, or Mono 4.4+ on Linux. .NET 4.6 should already be installed on most recent versions of Windows; if not, it is available as a Windows feature that can be turned on or installed from the Programs and Features control panel applet on consumer Windows, or from the Add Roles and Features option in Server Manager on server versions of Windows. For older versions of Windows, the .NET 4.6 installer from Microsoft can be found here.

On Linux the minimum version of Mono supported is 4.4. Although DevAudit runs on Mono 4 (with one known issue), it's recommended that Mono 5 be installed. Mono 5 brings many improvements to the build and runtime components of Mono that benefit DevAudit. The existing Mono packages provided by your distro are probably not Mono 5 yet, so you will have to install Mono packages manually to be able to use Mono 5. Installation instructions for the most recent packages provided by the Mono project for several major Linux distros are here. It is recommended you have the mono-devel package installed, as this will reduce the chances of missing assemblies. Alternatively, on Linux you can use the DevAudit Docker image if you do not wish to install Mono and already have Docker installed on your machine.

Installation
DevAudit can be installed by the following methods:
Building from source.
Using a binary release archive file downloaded from GitHub for Windows or Linux.
Using the release MSI installer downloaded from GitHub for Windows.
Using the Chocolatey package manager on Windows.
Pulling the ossindex/devaudit image from Docker Hub on Linux.

Building from source on Linux
Pre-requisites: Mono 4.4+ (Mono 5 recommended) and the mono-devel package, which provides the compiler and other tools needed for building Mono apps.
Your distro should have packages for at least Mono version 4.4 and above; otherwise, manual installation instructions for the most recent packages provided by the Mono project for several major Linux distros are here.
Clone the DevAudit repository from https://github.com/OSSIndex/DevAudit.git
Run the build.sh script in the root DevAudit directory. DevAudit should compile without any errors.
Run ./devaudit --help and you should see the DevAudit version and help screen printed.
Note that NuGet on Linux may occasionally exit with Error: NameResolutionFailure, which seems to be a transient problem contacting the servers that contain the NuGet packages. You should just run ./build.sh again until the build completes normally.

Building from source on Windows
Pre-requisites: You must have one of: a .NET Framework 4.6 SDK or developer pack, or Visual Studio 2015.
Clone the DevAudit repository from https://github.com/OSSIndex/DevAudit.git
From a Visual Studio 2015 or .NET SDK command prompt, run the build.cmd script in the root DevAudit directory. DevAudit should compile without any errors.
Run devaudit --help and you should see the DevAudit version and help screen printed.

Installing from the release archive files on Windows or Linux
Pre-requisites: You must have Mono 4.4+ on Linux or .NET 4.6 on Windows.
Download the latest release archive file for Windows or Linux from the project releases page. Unpack this file to a directory.
From the directory where you unpacked the release archive, run devaudit --help on Windows or ./devaudit --help on Linux. You should see the version and help screen printed.
(Optional) Add the DevAudit installation directory to your PATH environment variable.

Installing using the MSI Installer on Windows
The MSI installer for a release can be found on the GitHub releases page.
Click on the releases link near the top of the page.
Identify the release you would like to install. A "DevAudit.exe" link should be visible for each release that has a pre-built installer.
Download the file and execute the installer. You will be guided through a simple installation.
Open a new command prompt or PowerShell window in order to have DevAudit in your path.
Run DevAudit.

Installing using Chocolatey on Windows
DevAudit is also available on Chocolatey.
Install Chocolatey.
Open an admin console or PowerShell window.
Type choco install devaudit
Run DevAudit.

Installing using Docker on Linux
Pull the DevAudit image from Docker Hub: docker pull ossindex/devaudit. The image tagged ossindex/devaudit:latest (the default image that is downloaded) is built from the most recent release, while ossindex/devaudit:unstable is built on the master branch of the source code and contains the newest additions, albeit with less testing.

Concepts
Audit Target
Represents a logical group of auditing functions. DevAudit currently supports the following audit targets:
Package Source. A package source manages application and library dependencies using a package manager. Package managers install, remove, or update applications and library dependencies for an operating system like Debian Linux, or for a development language or framework like .NET or nodejs. Examples of package sources are dpkg, yum, Chocolatey, Composer, and Bower. DevAudit audits the names and versions of installed packages against vulnerabilities reported for specific versions of those packages.
Application. An application like Drupal or a custom application built using a framework like ASP.NET.
DevAudit audits applications and application modules and plugins against vulnerabilities reported for specific versions of application binaries, modules, and plugins. DevAudit can also audit application configurations for known vulnerabilities, and perform static analysis on application code looking for known weaknesses.
Application Server. Application servers provide continuously running services or daemons, like a web or database server, for other applications to use, or for users to access services like authentication. Examples of application servers are the OpenSSH sshd and Apache httpd servers. DevAudit can audit application server binaries, modules, and plugins against vulnerabilities reported for specific versions, as well as audit server configurations for known server configuration vulnerabilities and weaknesses.

Audit Environment
Represents a logical environment where audits against audit targets are executed. Audit environments abstract the I/O and command executions required for an audit and allow identical functions to be performed against audit targets at whatever physical or network location the target's files and executables are located. The following environments are currently supported:
Local. This is the default audit environment, where audits are executed on the local machine.
SSH. Audits are executed on a remote host connected over SSH. It is not necessary to have DevAudit installed on the remote host.
WinRM. Audits are executed on a remote Windows host connected over WinRM. It is not necessary to have DevAudit installed on the remote host.
Docker. Audits are executed on a running Docker container. It is not necessary to have DevAudit installed on the container image.
GitHub. Audits are executed on a GitHub project repository's file-system directly. It is not necessary to check out or download the project locally to perform the audit.

Audit Options
These are different options that can be enabled for the audit.
You can specify options that apply to the DevAudit program, for example to run in non-interactive mode, as well as options that apply to the target, e.g. if you set the AppDevMode option to true when auditing ASP.NET applications, then certain audit rules will not be enabled.

Basic Usage
The CLI is the primary interface to the DevAudit program and is suitable both for interactive use and for non-interactive use in scheduled tasks, shell scripts, CI build pipelines, and post-build tasks in developer IDEs. The basic DevAudit CLI syntax is:
devaudit TARGET [ENVIRONMENT] | [OPTIONS]
where TARGET specifies the audit target, ENVIRONMENT specifies the audit environment, and OPTIONS specifies the options for the audit target and environment. There are 2 ways to specify options: program options and general audit options that apply to more than one target can be specified directly on the command line as parameters, while target-specific options can be specified with the -o option using the format -o OPTION1=VALUE1,OPTION2=VALUE2,… with commas delimiting each option key-value pair.
If you are piping or redirecting the program output to a file, you should always use the -n --non-interactive option to disable any interactive user interface features and animations.
When specifying file paths, an @ prefix before a path indicates to DevAudit that this path is relative to the root directory of the audit target, e.g. if you specify -r c:\myproject -b @bin\Debug\app2.exe, DevAudit considers the path to the binary file to be c:\myproject\bin\Debug\app2.exe.

Audit Targets
Package Sources
msi – Do a package audit of the Windows Installer MSI package source on Windows machines.
choco – Do a package audit of packages installed by the Choco package manager.
oneget – Do a package audit of the system OneGet package source on Windows.
nuget – Do a package audit of a NuGet v2 package source.
  You must specify the location of the NuGet packages.config file you wish to audit using the -f or --file option, otherwise the current directory will be searched for this file.
- bower: Do a package audit of a Bower package source. You must specify the location of the Bower packages.json file you wish to audit using the -f or --file option, otherwise the current directory will be searched for this file.
- composer: Do a package audit of a Composer package source. You must specify the location of the Composer composer.json file you wish to audit using the -f or --file option, otherwise the current directory will be searched for this file.
- dpkg: Do a package audit of the system dpkg package source on Debian Linux and derivatives.
- rpm: Do a package audit of the system RPM package source on RedHat Linux and derivatives.
- yum: Do a package audit of the system Yum package source on RedHat Linux and derivatives.

For every package source the following general audit options can be used:
-f --file Specify the location of the package manager configuration file if needed. The NuGet, Bower and Composer package sources require this option.
--list-packages Only list the packages in the package source scanned by DevAudit.
--list-artifacts Only list the artifacts found on OSS Index for the packages scanned by DevAudit.

Package sources tagged [Experimental] are only available in the master branch of the source code and may have limited back-end OSS Index support. However, you can always list the packages scanned and the artifacts available on OSS Index using the --list-packages and --list-artifacts options.

Applications
- aspnet: Do an application audit on an ASP.NET application. The relevant options are:
  -r --root-directory Specify the root directory of the application. This is just the top-level application directory that contains files like Global.asax and Web.config.
  -b --application-binary Specify the application binary. This is the .NET assembly that contains the application's .NET bytecode.
  This file is usually a .DLL located in the bin sub-folder of the ASP.NET application root directory.
  -c --configuration-file or -o AppConfig=configuration-file Specifies the ASP.NET application configuration file. This file is usually named Web.config and located in the application root directory. You can override the default @Web.config value with this option.
  -o AppDevMode=enabled Specifies that application development mode should be enabled for the audit. This mode can be used when auditing an application that is under development. Certain configuration rules that are tagged as disabled for AppDevMode (e.g. running the application in ASP.NET debug mode) will not be enabled during the audit.
- netfx: Do an application audit on a .NET application. The relevant options are:
  -r --root-directory Specify the root directory of the application. This is just the top-level application directory that contains files like App.config.
  -b --application-binary Specify the application binary. This is the .NET assembly that contains the application's .NET bytecode. This file is usually a .DLL located in the bin sub-folder of the application root directory.
  -c --configuration-file or -o AppConfig=configuration-file Specifies the .NET application configuration file. This file is usually named App.config and located in the application root directory. You can override the default @App.config value with this option.
  -o GendarmeRules=RuleLibrary Specifies that the Gendarme static analyzer should be enabled for the audit, with rules taken from the specified rules library. For example:

devaudit netfx -r /home/allisterb/vbot-debian/vbot.core -b @bin/Debug/vbot.core.dll --skip-packages-audit -o GendarmeRules=Gendarme.Rules.Naming

runs the Gendarme static analyzer on the vbot.core.dll assembly using rules from the Gendarme.Rules.Naming library.
The complete list of rules libraries is (taken from the Gendarme wiki):

Gendarme.Rules.BadPractice
Gendarme.Rules.Concurrency
Gendarme.Rules.Correctness
Gendarme.Rules.Design
Gendarme.Rules.Design.Generic
Gendarme.Rules.Design.Linq
Gendarme.Rules.Exceptions
Gendarme.Rules.Gendarme
Gendarme.Rules.Globalization
Gendarme.Rules.Interoperability
Gendarme.Rules.Interoperability.Com
Gendarme.Rules.Maintainability
Gendarme.Rules.NUnit
Gendarme.Rules.Naming
Gendarme.Rules.Performance
Gendarme.Rules.Portability
Gendarme.Rules.Security
Gendarme.Rules.Security.Cas
Gendarme.Rules.Serialization
Gendarme.Rules.Smells
Gendarme.Rules.Ui

- drupal7: Do an application audit on a Drupal 7 application.
  -r --root-directory Specify the root directory of the application. This is just the top-level directory of your Drupal 7 install.
- drupal8: Do an application audit on a Drupal 8 application.
  -r --root-directory Specify the root directory of the application. This is just the top-level directory of your Drupal 8 install.

All applications also support the following common options for auditing the application modules or plugins:
--list-packages Only list the application plugins or modules scanned by DevAudit.
--list-artifacts Only list the artifacts found on OSS Index for the application plugins and modules scanned by DevAudit.
--skip-packages-audit Only do an application configuration or code analysis audit and skip the packages audit.

Application Servers
- sshd: Do an application server audit on an OpenSSH sshd-compatible server.
- httpd: Do an application server audit on an Apache httpd-compatible server.
- mysql: Do an application server audit on a MySQL-compatible server (like MariaDB or Oracle MySQL).
- nginx: Do an application server audit on an Nginx server.
- pgsql: Do an application server audit on a PostgreSQL server.
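Several audit options take @-prefixed paths. As a minimal sketch of the resolution rule explained under Basic Usage (the root and configuration paths here are purely illustrative), an @ path is simply joined to the -r root directory:

```shell
# DevAudit's @-path convention: a path prefixed with @ is resolved
# relative to the audit target's root directory (-r). The paths below
# are illustrative only.
root="/usr/local/apache2"
conf="@conf/httpd.conf"
resolved="$root/${conf#@}"   # strip the leading @ and join with the root
echo "$resolved"             # /usr/local/apache2/conf/httpd.conf
```

The same rule applies to binaries, so -b @bin/httpd with the root above resolves to /usr/local/apache2/bin/httpd.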
This is an example command line for an application server audit:

./devaudit httpd -i httpd-2.2 -r /usr/local/apache2/ -c @conf/httpd.conf -b @bin/httpd

which audits an Apache httpd server running in a Docker container named httpd-2.2. The following audit options are common to all application servers:
-r --root-directory Specifies the root directory of the server. This is just the top level of your server filesystem and defaults to / unless you want a different server root.
-c --configuration-file Specifies the server configuration file; e.g. in the above audit the Apache configuration file is located at /usr/local/apache2/conf/httpd.conf. If you don't specify the configuration file, DevAudit will attempt to auto-detect it for the selected server.
-b --application-binary Specifies the server binary; e.g. in the above audit the Apache binary is located at /usr/local/apache2/bin/httpd. If you don't specify the binary path, DevAudit will attempt to auto-detect it for the selected server.

Application servers also support the following common options for auditing the server modules or plugins:
--list-packages Only list the application plugins or modules scanned by DevAudit.
--list-artifacts Only list the artifacts found on OSS Index for the application plugins and modules scanned by DevAudit.
--skip-packages-audit Only do a server configuration audit and skip the packages audit.

Environments
There are currently five audit environments supported: local, remote hosts over SSH, remote hosts over WinRM, Docker containers, and GitHub. Local environments are used by default when no other environment options are specified.

SSH
The SSH environment allows audits to be performed on any remote host accessible over SSH without requiring DevAudit to be installed on the remote host. SSH environments are cross-platform: you can connect to a Linux remote host from a Windows machine running DevAudit.
An SSH environment is created with the following options:

-s SERVER [--ssh-port PORT] -u USER [-k KEYFILE] [-p | --password-text PASSWORD]

-s SERVER Specifies the remote host or IP to connect to via SSH.
-u USER Specifies the user to log in to the server with.
--ssh-port PORT Specifies the port on the remote host to connect to. The default is 22.
-k KEYFILE Specifies the OpenSSH-compatible private key file to use to connect to the remote server. Currently only RSA or DSA keys in PEM-format files are supported.
-p Provides a prompt, with local echo disabled, for interactive entry of the server password or key file passphrase.
--password-text PASSWORD Specifies the user password or key file passphrase as plaintext on the command line. Note that on Linux, when your password contains special characters, you should enclose the text on the command line in single quotes, like 'MyPa
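Combining these options, a remote audit over SSH might look like the sketch below. The server address, user name and key file are placeholders, and the command line is only printed here rather than executed:

```shell
# Hypothetical remote sshd audit over SSH. The server address, user and
# key file below are placeholders; the command line is printed, not run.
server="203.0.113.10"
user="admin"
keyfile="$HOME/.ssh/id_rsa"
echo "devaudit sshd -s $server --ssh-port 22 -u $user -k $keyfile"
```

With -p in place of -k, DevAudit would instead prompt interactively for the server password, with local echo disabled.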

Link: http://www.kitploit.com/2018/12/devaudit-open-source-cross-platform.html