DevAudit – Open-source, Cross-Platform, Multi-Purpose Security Auditing Tool

DevAudit is an open-source, cross-platform, multi-purpose security auditing tool targeted at developers and teams adopting DevOps and DevSecOps that detects security vulnerabilities at multiple levels of the solution stack. DevAudit provides a wide array of auditing capabilities that automate security practices and the implementation of security auditing in the software development life-cycle. DevAudit can scan your operating system and application package dependencies, application and application server configurations, and application code for potential vulnerabilities, based on data aggregated by providers like OSS Index and Vulners from a wide array of sources and data feeds such as the National Vulnerability Database (NVD) CVE data feed, the Debian Security Advisories data feed, Drupal Security Advisories, and many others.

DevAudit helps developers address at least 4 of the OWASP Top 10 risks to web application development:

- A9 Using Components with Known Vulnerabilities
- A5 Security Misconfiguration
- A6 Sensitive Data Exposure
- A2 Broken Authentication and Session Management

as well as risks classified by MITRE in the CWE dictionary such as CWE-2 Environment and CWE-200 Information Disclosure. As development progresses and its capabilities mature, DevAudit will be able to address the other risks on the OWASP Top 10 and CWE lists like Injection and XSS.

With the focus on web, cloud and distributed multi-user applications, software development today is increasingly a complex affair with security issues and potential vulnerabilities arising at all levels of the stack developers rely on to deliver applications. The goal of DevAudit is to provide a platform for automating the implementation of development security reviews and best practices at all levels of the solution stack, from library package dependencies to application and server configuration to source code.

Features

- Cross-platform with a Docker image also available. DevAudit runs on Windows and Linux, with *BSD, Mac and ARM Linux support planned. Only an up-to-date version of .NET or Mono is required to run DevAudit. A DevAudit Docker image can also be pulled from Docker Hub and run without the need to install Mono.
- CLI interface. DevAudit has a CLI interface with an option for non-interactive output and can be easily integrated into CI build pipelines or as post-build command-line tasks in developer IDEs. Work on integration of the core audit library into IDE GUIs has already begun with the Audit.Net Visual Studio extension.
- Continuously updated vulnerabilities data. DevAudit uses backend data providers like OSS Index and Vulners which provide continuously updated vulnerabilities data compiled from a wide range of security data feeds and sources such as the NVD CVE feeds, Drupal Security Advisories, and so on. Support for additional vulnerability and package data providers like vFeed and Libraries.io will be added.
- Audit operating system and development package dependencies. DevAudit audits Windows applications and packages installed via Windows MSI, Chocolatey, and OneGet, as well as Debian, Ubuntu, and CentOS Linux packages installed via Dpkg, RPM and YUM, for vulnerabilities reported for specific versions of the applications and packages. For development package dependencies and libraries, DevAudit audits NuGet v2 dependencies for .NET, Yarn/NPM and Bower dependencies for nodejs, and Composer package dependencies for PHP. Support for other package managers for different languages is added regularly.
- Audit application server configurations. DevAudit audits the server version and the server configuration for the OpenSSH sshd, Apache httpd, MySQL/MariaDB, PostgreSQL, and Nginx servers, with many more coming. Configuration auditing is based on the Alpheus library and is done using full syntactic analysis of the server configuration files. Server configuration rules are stored in YAML text files and can be customized to the needs of developers. Support for many more servers and applications and types of analysis like database auditing is added regularly.
- Audit application configurations. DevAudit audits Microsoft ASP.NET applications and detects vulnerabilities present in the application configuration. Application configuration rules are stored in YAML text files and can be customized to the needs of developers. Application configuration auditing for applications like Drupal, WordPress and DNN CMS is coming.
- Audit application code by static analysis. DevAudit currently supports static analysis of .NET CIL bytecode. Analyzers reside in external script files and can be fully customized based on the needs of the developer. Support for C# source code analysis via Roslyn, PHP7 source code and many more languages and external static code analysis tools is coming.
- Remote agentless auditing. DevAudit can connect to remote hosts via SSH with identical auditing features available in remote environments as in local environments. Only a valid SSH login is required to audit remote hosts, and DevAudit running on Windows can connect to and audit Linux hosts over SSH. On Windows DevAudit can also remotely connect to and audit other Windows machines using WinRM.
- Agentless Docker container auditing. DevAudit can audit running Docker containers from the Docker host with identical features available in container environments as in local environments.
- GitHub repository auditing. DevAudit can connect directly to a project repository hosted on GitHub and perform package source and application configuration auditing.
- PowerShell support. DevAudit can also be run inside the PowerShell system administration environment as cmdlets. Work on PowerShell support is paused at present but will resume in the near future with support for cross-platform PowerShell on both Windows and Linux.

Requirements

DevAudit is a .NET 4.6 application. To install locally on your machine you will need either the Microsoft .NET Framework 4.6 runtime on Windows, or Mono 4.4+ on Linux. .NET 4.6 should already be installed on most recent versions of Windows; if not, it is available as a Windows feature that can be turned on or installed from the Programs and Features control panel applet on consumer Windows, or from the Add Roles and Features option in Server Manager on server versions of Windows. For older versions of Windows, the .NET 4.6 installer from Microsoft can be found here.

On Linux the minimum version of Mono supported is 4.4. Although DevAudit runs on Mono 4 (with one known issue), it is recommended that Mono 5 be installed. Mono 5 brings many improvements to the build and runtime components of Mono that benefit DevAudit. The existing Mono packages provided by your distro are probably not Mono 5 as yet, so you will have to install Mono packages manually to be able to use Mono 5. Installation instructions for the most recent packages provided by the Mono project for several major Linux distros are here.
It is recommended you have the mono-devel package installed as this will reduce the chances of missing assemblies. Alternatively, on Linux you can use the DevAudit Docker image if you do not wish to install Mono and already have Docker installed on your machine.

Installation

DevAudit can be installed by the following methods:
- Building from source.
- Using a binary release archive file downloaded from GitHub for Windows or Linux.
- Using the release MSI installer downloaded from GitHub for Windows.
- Using the Chocolatey package manager on Windows.
- Pulling the ossindex/devaudit image from Docker Hub on Linux.

Building from source on Linux
Pre-requisites: Mono 4.4+ (Mono 5 recommended) and the mono-devel package, which provides the compiler and other tools needed for building Mono apps. Your distro should have packages for at least Mono version 4.4 and above; otherwise, manual installation instructions for the most recent packages provided by the Mono project for several major Linux distros are here.
1. Clone the DevAudit repository from https://github.com/OSSIndex/DevAudit.git
2. Run the build.sh script in the root DevAudit directory. DevAudit should compile without any errors.
3. Run ./devaudit --help and you should see the DevAudit version and help screen printed.
Note that NuGet on Linux may occasionally exit with Error: NameResolutionFailure, which seems to be a transient problem contacting the servers that contain the NuGet packages. You should just run ./build.sh again until the build completes normally.

Building from source on Windows
Pre-requisites: You must have one of:
- A .NET Framework 4.6 SDK or developer pack.
- Visual Studio 2015.
1. Clone the DevAudit repository from https://github.com/OSSIndex/DevAudit.git
2. From a Visual Studio 2015 or .NET SDK command prompt, run the build.cmd script in the root DevAudit directory. DevAudit should compile without any errors.
3. Run devaudit --help and you should see the DevAudit version and help screen printed.

Installing from the release archive files on Windows or Linux
Pre-requisites: You must have Mono 4.4+ on Linux or .NET 4.6 on Windows.
1. Download the latest release archive file for Windows or Linux from the project releases page.
2. Unpack this file to a directory.
3. From the directory where you unpacked the release archive, run devaudit --help on Windows or ./devaudit --help on Linux. You should see the version and help screen printed.
4. (Optional) Add the DevAudit installation directory to your PATH environment variable.

Installing using the MSI Installer on Windows
The MSI installer for a release can be found on the GitHub releases page.
1. Click on the releases link near the top of the page.
2. Identify the release you would like to install.
3. A "DevAudit.exe" link should be visible for each release that has a pre-built installer.
4. Download the file and execute the installer. You will be guided through a simple installation.
5. Open a new command prompt or PowerShell window in order to have DevAudit in your path.
6. Run DevAudit.

Installing using Chocolatey on Windows
DevAudit is also available on Chocolatey.
1. Install Chocolatey.
2. Open an admin console or PowerShell window.
3. Type choco install devaudit
4. Run DevAudit.

Installing using Docker on Linux
Pull the DevAudit image from Docker Hub: docker pull ossindex/devaudit. The image tagged ossindex/devaudit:latest (which is the default image that is downloaded) is built from the most recent release, while ossindex/devaudit:unstable is built on the master branch of the source code and contains the newest additions, albeit with less testing.
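Assuming the image's entrypoint is the devaudit binary itself (an assumption, not stated above), a quick smoke test of the Docker-based install is to print the help screen:

docker run -it ossindex/devaudit --help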
Concepts

Audit Target
Represents a logical group of auditing functions. DevAudit currently supports the following audit targets:
- Package Source. A package source manages application and library dependencies using a package manager. Package managers install, remove or update applications and library dependencies for an operating system like Debian Linux, or for a development language or framework like .NET or nodejs. Examples of package sources are dpkg, yum, Chocolatey, Composer, and Bower. DevAudit audits the names and versions of installed packages against vulnerabilities reported for specific versions of those packages.
- Application. An application like Drupal or a custom application built using a framework like ASP.NET. DevAudit audits applications and application modules and plugins against vulnerabilities reported for specific versions of application binaries and modules and plugins. DevAudit can also audit application configurations for known vulnerabilities, and perform static analysis on application code looking for known weaknesses.
- Application Server. Application servers provide continuously running services or daemons, like a web or database server, for other applications to use, or for users to access services like authentication. Examples of application servers are the OpenSSH sshd and Apache httpd servers. DevAudit can audit application server binaries, modules and plugins against vulnerabilities reported for specific versions, as well as audit server configurations for known server configuration vulnerabilities and weaknesses.

Audit Environment
Represents a logical environment where audits against audit targets are executed. Audit environments abstract the I/O and command executions required for an audit and allow identical functions to be performed against audit targets on whatever physical or network location the target's files and executables are located. The following environments are currently supported:
- Local. This is the default audit environment where audits are executed on the local machine.
- SSH. Audits are executed on a remote host connected over SSH. It is not necessary to have DevAudit installed on the remote host.
- WinRM. Audits are executed on a remote Windows host connected over WinRM. It is not necessary to have DevAudit installed on the remote host.
- Docker. Audits are executed on a running Docker container. It is not necessary to have DevAudit installed on the container image.
- GitHub. Audits are executed on a GitHub project repository's file-system directly. It is not necessary to check out or download the project locally to perform the audit.

Audit Options
These are different options that can be enabled for the audit. You can specify options that apply to the DevAudit program, for example to run in non-interactive mode, as well as options that apply to the target, e.g. if you set the AppDevMode option for auditing ASP.NET applications to true then certain audit rules will not be enabled.

Basic Usage
The CLI is the primary interface to the DevAudit program and is suitable both for interactive use and for non-interactive use in scheduled tasks, shell scripts, CI build pipelines and post-build tasks in developer IDEs.
The basic DevAudit CLI syntax is:

devaudit TARGET [ENVIRONMENT] | [OPTIONS]

where TARGET specifies the audit target, ENVIRONMENT specifies the audit environment, and OPTIONS specifies the options for the audit target and environment. There are 2 ways to specify options: program options and general audit options that apply to more than one target can be specified directly on the command-line as parameters, while target-specific options can be specified with the -o option using the format -o OPTION1=VALUE1,OPTION2=VALUE2,... with commas delimiting each option key-value pair.

If you are piping or redirecting the program output to a file then you should always use the -n --non-interactive option to disable any interactive user interface features and animations.

When specifying file paths, an @ prefix before a path indicates to DevAudit that this path is relative to the root directory of the audit target, e.g. if you specify -r c:\myproject -b @bin\Debug\app2.exe then DevAudit considers the path to the binary file to be c:\myproject\bin\Debug\app2.exe.

Audit Targets

Package Sources
- msi: Do a package audit of the Windows Installer MSI package source on Windows machines.
- choco: Do a package audit of packages installed by the Choco package manager.
- oneget: Do a package audit of the system OneGet package source on Windows.
- nuget: Do a package audit of a NuGet v2 package source. You must specify the location of the NuGet packages.config file you wish to audit using the -f or --file option, otherwise the current directory will be searched for this file.
- bower: Do a package audit of a Bower package source. You must specify the location of the Bower packages.json file you wish to audit using the -f or --file option, otherwise the current directory will be searched for this file.
- composer: Do a package audit of a Composer package source. You must specify the location of the Composer composer.json file you wish to audit using the -f or --file option, otherwise the current directory will be searched for this file.
- dpkg: Do a package audit of the system dpkg package source on Debian Linux and derivatives.
- rpm: Do a package audit of the system RPM package source on RedHat Linux and derivatives.
- yum: Do a package audit of the system Yum package source on RedHat Linux and derivatives.

For every package source the following general audit options can be used:
- -f --file: Specify the location of the package manager configuration file if needed. The NuGet, Bower and Composer package sources require this option.
- --list-packages: Only list the packages in the package source scanned by DevAudit.
- --list-artifacts: Only list the artifacts found on OSS Index for packages scanned by DevAudit.

Package sources tagged [Experimental] are only available in the master branch of the source code and may have limited back-end OSS Index support. However, you can always list the packages scanned and artifacts available on OSS Index using the --list-packages and --list-artifacts options.
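For example, a couple of package-source audits built from the options above (the packages.config path is a hypothetical placeholder; -n keeps the output suitable for CI logs):

./devaudit nuget -f ./MyApp/packages.config -n
./devaudit dpkg -n --list-packages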
Applications
- aspnet: Do an application audit on an ASP.NET application. The relevant options are:
  - -r --root-directory: Specify the root directory of the application. This is just the top-level application directory that contains files like Global.asax and Web.config.
  - -b --application-binary: Specify the application binary. This is the .NET assembly that contains the application's .NET bytecode. This file is usually a .DLL and located in the bin sub-folder of the ASP.NET application root directory.
  - -c --configuration-file or -o AppConfig=configuration-file: Specifies the ASP.NET application configuration file. This file is usually named Web.config and located in the application root directory. You can override the default @Web.config value with this option.
  - -o AppDevMode=enabled: Specifies that application development mode should be enabled for the audit. This mode can be used when auditing an application that is under development. Certain configuration rules that are tagged as disabled for AppDevMode (e.g. running the application in ASP.NET debug mode) will not be enabled during the audit.
- netfx: Do an application audit on a .NET application. The relevant options are:
  - -r --root-directory: Specify the root directory of the application. This is just the top-level application directory that contains files like App.config.
  - -b --application-binary: Specify the application binary. This is the .NET assembly that contains the application's .NET bytecode. This file is usually a .DLL and located in the bin sub-folder of the application root directory.
  - -c --configuration-file or -o AppConfig=configuration-file: Specifies the .NET application configuration file. This file is usually named App.config and located in the application root directory. You can override the default @App.config value with this option.
  - -o GendarmeRules=RuleLibrary: Specifies that the Gendarme static analyzer should be enabled for the audit, with rules from the specified rules library used. For example:

    devaudit netfx -r /home/allisterb/vbot-debian/vbot.core -b @bin/Debug/vbot.core.dll --skip-packages-audit -o GendarmeRules=Gendarme.Rules.Naming

    will run the Gendarme static analyzer on the vbot.core.dll assembly using rules from the Gendarme.Rules.Naming library. The complete list of rules libraries is (taken from the Gendarme wiki): Gendarme.Rules.BadPractice, Gendarme.Rules.Concurrency, Gendarme.Rules.Correctness, Gendarme.Rules.Design, Gendarme.Rules.Design.Generic, Gendarme.Rules.Design.Linq, Gendarme.Rules.Exceptions, Gendarme.Rules.Gendarme, Gendarme.Rules.Globalization, Gendarme.Rules.Interoperability, Gendarme.Rules.Interoperability.Com, Gendarme.Rules.Maintainability, Gendarme.Rules.NUnit, Gendarme.Rules.Naming, Gendarme.Rules.Performance, Gendarme.Rules.Portability, Gendarme.Rules.Security, Gendarme.Rules.Security.Cas, Gendarme.Rules.Serialization, Gendarme.Rules.Smells, Gendarme.Rules.Ui
- drupal7: Do an application audit on a Drupal 7 application.
  - -r --root-directory: Specify the root directory of the application. This is just the top-level directory of your Drupal 7 install.
- drupal8: Do an application audit on a Drupal 8 application.
  - -r --root-directory: Specify the root directory of the application. This is just the top-level directory of your Drupal 8 install.

All applications also support the following common options for auditing the application modules or plugins:
- --list-packages: Only list the application plugins or modules scanned by DevAudit.
- --list-artifacts: Only list the artifacts found on OSS Index for application plugins and modules scanned by DevAudit.
- --skip-packages-audit: Only do an application configuration or code analysis audit and skip the packages audit.
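Putting the application options together, an ASP.NET audit might look like this (the project path and binary name are hypothetical placeholders):

./devaudit aspnet -r /var/www/MyWebApp -b @bin/MyWebApp.dll -c @Web.config -o AppDevMode=enabled -n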
Application Servers
- sshd: Do an application server audit on an OpenSSH sshd-compatible server.
- httpd: Do an application server audit on an Apache httpd-compatible server.
- mysql: Do an application server audit on a MySQL-compatible server (like MariaDB or Oracle MySQL).
- nginx: Do an application server audit on an Nginx server.
- pgsql: Do an application server audit on a PostgreSQL server.

This is an example command line for an application server audit:

./devaudit httpd -i httpd-2.2 -r /usr/local/apache2/ -c @conf/httpd.conf -b @bin/httpd

which audits an Apache httpd server running on a Docker container named httpd-2.2.

The following are audit options common to all application servers:
- -r --root-directory: Specifies the root directory of the server. This is just the top level of your server filesystem and defaults to / unless you want a different server root.
- -c --configuration-file: Specifies the server configuration file, e.g. in the above audit the Apache configuration file is located at /usr/local/apache2/conf/httpd.conf. If you don't specify the configuration file, DevAudit will attempt to auto-detect the configuration file for the server selected.
- -b --application-binary: Specifies the server binary, e.g. in the above audit the Apache binary is located at /usr/local/apache2/bin/httpd. If you don't specify the binary path, DevAudit will attempt to auto-detect the server binary for the server selected.

Application servers also support the following common options for auditing the server modules or plugins:
- --list-packages: Only list the application plugins or modules scanned by DevAudit.
- --list-artifacts: Only list the artifacts found on OSS Index for application plugins and modules scanned by DevAudit.
- --skip-packages-audit: Only do a server configuration audit and skip the packages audit.

Environments
There are currently 5 audit environments supported: local, remote hosts over SSH, remote hosts over WinRM, Docker containers, and GitHub. Local environments are used by default when no other environment options are specified.

SSH
The SSH environment allows audits to be performed on any remote hosts accessible over SSH without requiring DevAudit to be installed on the remote host. SSH environments are cross-platform: you can connect to a Linux remote host from a Windows machine running DevAudit. An SSH environment is created by the following options:

-s SERVER [--ssh-port PORT] -u USER [-k KEYFILE] [-p | --password-text PASSWORD]

- -s SERVER: Specifies the remote host or IP to connect to via SSH.
- -u USER: Specifies the user to login to the server with.
- --ssh-port PORT: Specifies the port on the remote host to connect to. The default is 22.
- -k KEYFILE: Specifies the OpenSSH-compatible private key file to use to connect to the remote server. Currently only RSA or DSA keys in files in the PEM format are supported.
- -p: Provide a prompt with local echo disabled for interactive entry of the server password or key file passphrase.
- --password-text PASSWORD: Specify the user password or key file passphrase as plaintext on the command-line. Note that on Linux, when your password contains special characters, you should enclose the text on the command-line using single-quotes like 'MyPa
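Combining an audit target with the SSH environment options above, a remote package audit could look like this (the host, user and key path are hypothetical placeholders):

./devaudit dpkg -s 192.168.1.50 -u auditor -k ~/.ssh/id_rsa -n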

Link: http://www.kitploit.com/2018/12/devaudit-open-source-cross-platform.html

Knock v.4.1.1 – Subdomain Scan

Knockpy is a python tool designed to enumerate subdomains on a target domain through a wordlist. It is designed to scan for DNS zone transfer and to try to bypass the wildcard DNS record automatically if it is enabled. Knockpy now supports queries to VirusTotal subdomains; you can set the API_KEY within the config.json file.

Very simply
$ knockpy domain.com

Export full report in JSON
If you want to save the full log, just type:
$ knockpy domain.com --json

Install

Prerequisites
- Python 2.7.6

Dependencies
- Dnspython
$ sudo apt-get install python-dnspython

Installing
$ git clone https://github.com/guelfoweb/knock.git
$ cd knock
$ nano knockpy/config.json <- set your virustotal API_KEY
$ sudo python setup.py install

Note that it's recommended to use Google DNS: 8.8.8.8 and 8.8.4.4

Knockpy arguments
$ knockpy -h
usage: knockpy [-h] [-v] [-w WORDLIST] [-r] [-c] [-j] domain

___________________________________________

knock subdomain scan
knockpy v.4.1
Author: Gianni 'guelfoweb' Amato
Github: https://github.com/guelfoweb/knock

___________________________________________

positional arguments:
  domain           target to scan, like domain.com

optional arguments:
  -h, --help       show this help message and exit
  -v, --version    show program's version number and exit
  -w WORDLIST      specific path to wordlist file
  -r, --resolve    resolve ip or domain name
  -c, --csv        save output in csv
  -f, --csvfields  add fields name to the first row of csv output file
  -j, --json       export full report in JSON

example:
  knockpy domain.com
  knockpy domain.com -w wordlist.txt
  knockpy -r domain.com or IP
  knockpy -c domain.com
  knockpy -j domain.com

For virustotal subdomains support you can set your API_KEY in the config.json file.
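The flags above can also be combined; a sketch based on the help output (combining -c with -f, and -w with -j, is assumed to work as documented):

$ knockpy -c -f domain.com                  # CSV output with field names in the first row
$ knockpy domain.com -w wordlist.txt -j     # custom wordlist, full JSON report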
"100.talkgadget.google.com", "services.google.com", "301.talkgadget.google.com", "857.talkgadget.google.com", "600.talkgadget.google.com", "992.talkgadget.google.com", "93.talkgadget.google.com", "storage.cloud.google.com", "863.talkgadget.google.com", "maps.google.com", "661.talkgadget.google.com", "325.talkgadget.google.com", "sites.google.com", "feedburner.google.com", "support.google.com", "code.google.com", "562.talkgadget.google.com", "190.talkgadget.google.com", "58.talkgadget.google.com", "612.talkgadget.google.com", "765.talkgadget.google.com", "973.talkgadget.google.com" ], "alias": [], "wildcard": { "detected": {}, "test_target": "eqskochdzapjbt.google.com", "enabled": false, "http_response": {} }, "ipaddress": [ "216.58.205.142" ], "response_time": "0.0351989269257", "http_response": { "status": { "reason": "Found", "code": 302 }, "http_headers": { "content-length": "256", "location": "http://www.google.it/?gfe_rd=cr&ei=60WIWdmnDILCXoKbgfgK", "cache-control": "private", "date": "Mon, 07 Aug 2017 10:50:19 GMT", "referrer-policy": "no-referrer", "content-type": "text/html; charset=UTF-8" } }}Save scan output in CSV$ knockpy -c domain.comExport full report in JSON$ knockpy -j domain.com Talk aboutEthical Hacking and Penetration Testing Guide Book by Rafay Baloch.Knockpy comes pre-installed on the following security distributions for penetration test:BackBox LinuxPentestBox for WindowsBuscador Investigative Operating System OtherThis tool is currently maintained by Gianni 'guelfoweb' Amato, who can be contacted at guelfoweb@gmail.com or twitter @guelfoweb. Suggestions and criticism are welcome.Download Knock

Link: http://www.kitploit.com/2018/12/knock-v411-subdomain-scan.html

Cameradar v2.1.0 – Hacks Its Way Into RTSP Videosurveillance Cameras

An RTSP stream access tool that comes with its library.

Cameradar allows you to:
- Detect open RTSP hosts on any accessible target host
- Detect which device model is streaming
- Launch automated dictionary attacks to get their stream route (e.g.: /live.sdp)
- Launch automated dictionary attacks to get the username and password of the cameras
- Retrieve a complete and user-friendly report of the results

Docker Image for Cameradar
Install docker on your machine, and run the following command:

docker run -t ullaakut/cameradar -t <other command-line options>

See command-line options. e.g.: docker run -t ullaakut/cameradar -t 192.168.100.0/24 -l will scan the ports 554 and 8554 of hosts on the 192.168.100.0/24 subnetwork, attack the discovered RTSP streams, and output debug logs.

YOUR_TARGET can be a subnet (e.g.: 172.16.100.0/24), an IP (e.g.: 172.16.100.10), or a range of IPs (e.g.: 172.16.100.10-20).

If you want to get the precise results of the nmap scan in the form of an XML file, you can add -v /your/path:/tmp/cameradar_scan.xml to the docker run command, before ullaakut/cameradar.

If you use the -r and -c options to specify your custom dictionaries, make sure to also use a volume to add them to the docker container. Example: docker run -t -v /path/to/dictionaries/:/tmp/ ullaakut/cameradar -r /tmp/myroutes -c /tmp/mycredentials.json -t mytarget

Installing the binary on your machine
Only use this solution if for some reason using docker is not an option for you or if you want to locally build Cameradar on your machine.

Dependencies
- go
- dep

Installing dep
- OSX: brew install dep and brew upgrade dep
- Others: Download the release package for your OS here

Steps to install
1. Make sure you installed the dependencies mentioned above.
2. go get github.com/Ullaakut/cameradar
3. cd $GOPATH/src/github.com/Ullaakut/cameradar
4. dep ensure
5. cd cameradar
6. go install

The cameradar binary is now in your $GOPATH/bin, ready to be used. See command line options here.

Library

Dependencies of the library
- curl-dev / libcurl (depending on your OS)
- nmap
- github.com/pkg/errors
- gopkg.in/go-playground/validator.v9
- github.com/andelf/go-curl

Installing the library
go get github.com/Ullaakut/cameradar

After this command, the cameradar library is ready to use. Its source will be in:
$GOPATH/src/pkg/github.com/Ullaakut/cameradar

You can use go get -u to update the package.

Here is an overview of the exposed functions of this library:

Discovery
You can use the cameradar library for simple discovery purposes if you don't need to access the cameras but just want to be aware of their existence. This describes the nmap time presets. You can pass a value between 1 and 5, as described in this table, to the NmapRun function.

Attack
If you already know which hosts and ports you want to attack, you can also skip the discovery part and use the attack functions directly. The attack functions also take a timeout value as a parameter.

Data models
Here are the different data models useful to use the exposed functions of the cameradar library.

Dictionary loaders
The cameradar library also provides two functions that take file paths as inputs and return the appropriate data models filled.
Configuration
The RTSP port used for most cameras is 554, so you should probably specify 554 as one of the ports you scan. Not specifying any ports to the cameradar application will scan the 554 and 8554 ports.

docker run -t --net=host ullaakut/cameradar -p "18554,19000-19010" -t localhost will scan the port 18554 and the range of ports between 19000 and 19010 on localhost.

You can use your own files for the ids and routes dictionaries used to attack the cameras, but the Cameradar repository already gives you a good base that works with most cameras, in the /dictionaries folder.

docker run -t -v /my/folder/with/dictionaries:/tmp/dictionaries \
  ullaakut/cameradar \
  -r "/tmp/dictionaries/my_routes" \
  -c "/tmp/dictionaries/my_credentials.json" \
  -t 172.19.124.0/24

This will put the contents of your folder containing dictionaries in the docker image and will use it for the dictionary attack instead of the default dictionaries provided in the cameradar repo.

Check camera access
If you have VLC Media Player, you should be able to use the GUI or the command-line to connect to the RTSP stream using this format: rtsp://username:password@address:port/route
With the above result, the RTSP URL would be rtsp://admin:12345@173.16.100.45:554/live.sdp

Command line options
- "-t, --target": Set target. Required. Target can be a file (see instructions on how to format the file), an IP, an IP range, a subnetwork, or a combination of those.
- "-p, --ports": (Default: 554,8554) Set custom ports.
- "-s, --speed": (Default: 4) Set custom nmap discovery presets to improve speed or accuracy. It's recommended to lower it if you are attempting to scan an unstable and slow network, or to increase it if on a very performant and reliable network. See this for more info on the nmap timing templates.
- "-T, --timeout": (Default: 2000) Set custom timeout value in milliseconds after which an attack attempt without an answer should give up. It's recommended to increase it when attempting to scan unstable and slow networks or to decrease it on very performant and reliable networks.
- "-r, --custom-routes": (Default: <CAMERADAR_GOPATH>/dictionaries/routes) Set custom dictionary path for routes
- "-c, --custom-credentials": (Default: <CAMERADAR_GOPATH>/dictionaries/credentials.json) Set custom dictionary path for credentials
- "-o, --nmap-output": (Default: /tmp/cameradar_scan.xml) Set custom nmap output path
- "-l, --log": Enable debug logs (nmap requests, curl describe requests, etc.)
- "-h": Display the usage information

Format input file
The file can contain IPs, hostnames, IP ranges and subnetworks, separated by newlines. Example:

0.0.0.0
localhost
192.17.0.0/16
192.168.1.140-255
192.168.2-3.0-255
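Putting several of these options together, a scan against a list of targets on a slow, unstable network might look like this (the target file path is a hypothetical placeholder):

docker run -v /tmp:/tmp --net=host -t ullaakut/cameradar -t /tmp/targets.txt -s 2 -T 6000 -l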
Environment Variables

CAMERADAR_TARGET
This variable is mandatory and specifies the target that cameradar should scan and attempt to access RTSP streams on.
Examples: 172.16.100.0/24, 192.168.1.1, localhost, 192.168.1.140-255, 192.168.2-3.0-255

CAMERADAR_PORTS
This variable is optional and allows you to specify the ports on which to run the scans.
Default value: 554,8554
It is recommended not to change these except if you are certain that cameras have been configured to stream RTSP over a different port. 99.9% of cameras are streaming on these ports.

CAMERADAR_NMAP_OUTPUT_FILE
This variable is optional and allows you to specify to which file nmap will write its output.
Default value: /tmp/cameradar_scan.xml
This can be useful only if you want to read the files yourself, if you don't want it to write in your /tmp folder, or if you want to use only the RunNmap function in cameradar and do its parsing manually.

CAMERADAR_CUSTOM_ROUTES, CAMERADAR_CUSTOM_CREDENTIALS
These variables are optional, allowing you to replace the default dictionaries with custom ones for the dictionary attack.
Default values: <CAMERADAR_GOPATH>/dictionaries/routes and <CAMERADAR_GOPATH>/dictionaries/credentials.json

CAMERADAR_SPEED
This optional variable allows you to set custom nmap discovery presets to improve speed or accuracy. It's recommended to lower it if you are attempting to scan an unstable and slow network, or to increase it if on a very performant and reliable network. See this for more info on the nmap timing templates.
Default value: 4

CAMERADAR_TIMEOUT
This optional variable allows you to set a custom timeout value in milliseconds after which an attack attempt without an answer should give up. It's recommended to increase it when attempting to scan unstable and slow networks or to decrease it on very performant and reliable networks.
Default value: 2000

CAMERADAR_LOGS
This optional variable allows you to enable a more verbose output to have more information about what is going on. It will output nmap results, cURL requests, etc.
Default: false
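Assuming the containerized binary reads these variables at startup (so they can stand in for the corresponding flags), a configuration-by-environment run could look like:

docker run --net=host -e CAMERADAR_TARGET=192.168.1.0/24 -e CAMERADAR_SPEED=3 -e CAMERADAR_LOGS=true -t ullaakut/cameradar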
Contribution

Build

Docker build
To build the docker image, simply run docker build -t cameradar . in the root of the project. Your image will be called cameradar and NOT ullaakut/cameradar.

Go build
To build the project without docker:
1. Install dep
   - OSX: brew install dep and brew upgrade dep
   - Others: Download the release package for your OS here
2. dep ensure
3. go build to build the library
4. cd cameradar && go build to build the binary
The cameradar binary is now in the root of the directory.

See the contribution document to get started.

Frequently Asked Questions

Cameradar does not detect any camera!
That means that either your cameras are not streaming in RTSP or that they are not on the target you are scanning. In most cases, CCTV cameras will be on a private subnetwork, isolated from the internet. Use the -t option to specify your target.

Cameradar detects my cameras, but does not manage to access them at all!
Maybe your cameras have been configured and the credentials / URL have been changed. Cameradar only guesses using default constructor values if a custom dictionary is not provided. You can use your own dictionaries in which you just have to add your credentials and RTSP routes. To do that, see how the configuration works. Also, maybe your camera's credentials are not yet known, in which case if you find them it would be very nice to add them to the Cameradar dictionaries to help other people in the future.

What happened to the C++ version?
You can still find it under the 1.1.4 tag on this repo; however, it was less performant and stable than the current version written in Golang.

How to use the Cameradar library for my own project?
See the example in /cameradar. You just need to run go get github.com/Ullaakut/cameradar and to use the cmrdr package in your code. You can find the documentation on godoc.

I want to scan my own localhost for some reason and it does not work! What's going on?
Use the --net=host flag when launching the cameradar image, or use the binary by running go run cameradar/cameradar.go or installing it.

I don't see a colored output :(
You forgot the -t flag before ullaakut/cameradar in your command-line. This tells docker to allocate a pseudo-tty for cameradar, which makes it able to use colors.

I don't have a camera but I'd like to try Cameradar!
Simply run docker run -p 8554:8554 -e RTSP_USERNAME=admin -e RTSP_PASSWORD=12345 -e RTSP_PORT=8554 ullaakut/rtspatt and then run cameradar, and it should guess that the username is admin and the password is 12345. You can try this with any default constructor credentials (they can be found here).

Examples

Running cameradar on your own machine to scan for default ports
docker run --net=host -t ullaakut/cameradar -t localhost

Running cameradar with an input file, logs enabled on port 8554
docker run -v /tmp:/tmp --net=host -t ullaakut/cameradar -t /tmp/test.txt -p 8554 -l

Download Cameradar

Link: http://feedproxy.google.com/~r/PentestTools/~3/1bUGqwOggUY/cameradar-v210-hacks-its-way-into-rtsp.html

Evilginx2 v2.2.0 – Standalone Man-In-The-Middle Attack Framework Used For Phishing Login Credentials Along With Session Cookies, Allowing For The Bypass Of 2-Factor Authentication

evilginx2 is a man-in-the-middle attack framework used for phishing login credentials along with session cookies, which in turn allows the attacker to bypass 2-factor authentication protection.

This tool is a successor to Evilginx, released in 2017, which used a custom version of the nginx HTTP server to provide man-in-the-middle functionality to act as a proxy between a browser and phished website. The present version is fully written in GO as a standalone application, which implements its own HTTP and DNS server, making it extremely easy to set up and use.

Video
See evilginx2 in action here: Evilginx 2 – Next Generation of Phishing 2FA Tokens from breakdev.org on Vimeo.

Write-up
If you want to learn more about this phishing technique, I've published an extensive blog post about evilginx2 here:
https://breakdev.org/evilginx-2-next-generation-of-phishing-2fa-tokens

Phishlet Masters – Hall of Fame
Please thank the following contributors for devoting their precious time to deliver us fresh phishlets! (in order of first contributions)
- @cust0msync – Amazon, Reddit
- @white_fi – Twitter
- rvrsh3ll @424f424f – Citrix

Installation
You can either use a precompiled binary package for your architecture or you can compile evilginx2 from source.

You will need an external server where you'll host your evilginx2 installation. I personally recommend Digital Ocean, and if you follow my referral link, you will get an extra $10 to spend on servers for free. Evilginx runs very well on the most basic Debian 8 VPS.

Installing from source
In order to compile from source, make sure you have installed GO of version at least 1.10.0 (get it from here) and that the $GOPATH environment variable is set up properly (def. $HOME/go).

After installation, add this to your ~/.profile, assuming that you installed GO in /usr/local/go:

export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin

Then load it with source ~/.profile.

Now you should be ready to install evilginx2. Follow these instructions:

sudo apt-get install git make
go get -u github.com/kgretzky/evilginx2
cd $GOPATH/src/github.com/kgretzky/evilginx2
make

You can now either run evilginx2 from the local directory like:

sudo ./bin/evilginx -p ./phishlets/

or install it globally:

sudo make install
sudo evilginx

The instructions above can also be used to update evilginx2 to the latest version.

Installing with Docker
You can launch evilginx2 from within Docker. First build the container:

docker build . -t evilginx2

Then you can run the container:

docker run -it -p 53:53/udp -p 80:80 -p 443:443 evilginx2

Phishlets are loaded within the container at /app/phishlets, which can be mounted as a volume for configuration.
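For example, to keep phishlets outside the container, that directory can be supplied as a bind mount (the host path is a hypothetical placeholder):

docker run -it -p 53:53/udp -p 80:80 -p 443:443 -v /opt/phishlets:/app/phishlets evilginx2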
Installing from precompiled binary packages
Grab the package you want from here and drop it on your box. Then do:

unzip <package_name>.zip -d <package_name>
cd <package_name>

If you want to do a system-wide install, use the install script with root privileges:

chmod 700 ./install.sh
sudo ./install.sh
sudo evilginx

or just launch evilginx2 from the current directory (you will also need root privileges):

chmod 700 ./evilginx
sudo ./evilginx

Usage
IMPORTANT! Make sure that there is no service listening on ports TCP 443, TCP 80 and UDP 53. You may need to shut down apache or nginx and any service used for resolving DNS that may be running. evilginx2 will tell you on launch if it fails to open a listening socket on any of these ports.

By default, evilginx2 will look for phishlets in the ./phishlets/ directory and later in /usr/share/evilginx/phishlets/. If you want to specify a custom path to load phishlets from, use the -p <phishlets_dir_path> parameter when launching the tool.

Usage of ./evilginx:
  -debug
        Enable debug output
  -developer
        Enable developer mode (generates self-signed certificates for all hostnames)
  -p string
        Phishlets directory path

You should see the evilginx2 logo with a prompt to enter commands. Type help or help <command> if you want to see available commands or more detailed information on them.

Getting started
To get up and running, you need to first do some setting up. At this point I assume you've already registered a domain (let's call it yourdomain.com) and you set up the nameservers (both ns1 and ns2) in your domain provider's admin panel to point to your server's IP (e.g. 10.0.0.1):

ns1.yourdomain.com = 10.0.0.1
ns2.yourdomain.com = 10.0.0.1

Set up your server's domain and IP using the following commands:

config domain yourdomain.com
config ip 10.0.0.1

Now you can set up the phishlet you want to use. For the sake of this short guide, we will use a LinkedIn phishlet. Set up the hostname for the phishlet (it must contain your domain obviously):

phishlets hostname linkedin my.phishing.hostname.yourdomain.com

And now you can enable the phishlet, which will initiate automatic retrieval of LetsEncrypt SSL/TLS certificates if none are locally found for the hostname you picked:

phishlets enable linkedin

Your phishing site is now live. Think of the URL you want the victim to be redirected to on successful login and get the phishing URL like this (the victim will be redirected to https://www.google.com):

phishlets get-url linkedin https://www.google.com

Running phishlets will only respond to tokenized links, so any scanners who scan your main domain will be redirected to the URL specified as redirect_url under config. If you want to hide your phishlet and make it not respond even to valid tokenized phishing URLs, use the phishlet hide/unhide <phishlet> command.

You can monitor captured credentials and session cookies with:

sessions

To get detailed information about the captured session, with the session cookie itself (it will be printed in JSON format at the bottom), select its session ID:

sessions <id>

The captured session cookie can be copied and imported into the Chrome browser, using the EditThisCookie extension.

Important! If you want evilginx2 to continue running after you log out from your server, you should run it inside a screen session.

Credits
Huge thanks to Simone Margaritelli (@evilsocket) for bettercap and for inspiring me to learn GO and rewrite the tool in that language!

Download Evilginx2

Link: http://www.kitploit.com/2018/12/evilginx2-v220-standalone-man-in-middle.html

MEC v1.4.0 – Mass Exploit Console

massExploitConsole is a collection of hacking tools with a cli ui.

Disclaimer
- Please use this tool only on authorized systems; I'm not responsible for any damage caused by users who ignore my warning.
- Exploits are adapted from other sources; please refer to their author info.
- Please note, due to my limited programming experience (it's my first Python project), you can expect some silly bugs.

Features
- an easy-to-use cli ui
- execute any adapted exploits with process-level concurrency
- some built-in exploits (automated)
- hide your ip addr using proxychains4 and ss-proxy (built-in)
- zoomeye host scan (10 threads)
- a simple baidu crawler (multi-threaded)
- censys host scan

Getting started
git clone https://github.com/jm33-m0/massExpConsole.git && cd massExpConsole && ./install.py
- when installing pypi deps, apt-get install libncurses5-dev (for Debian-based distros) might be needed
- now you should be good to go (if not, please report missing deps here)
- type the proxy command to run a pre-configured Shadowsocks socks5 proxy in the background; vim ./data/ss.json to edit the proxy config. ss-proxy exits with mec.py

Requirements
- GNU/Linux, WSL, MacOS (not tested); fully tested under Arch Linux, Kali Linux (Rolling, 2018), Ubuntu Linux (16.04 LTS) and Fedora 25 (it will work on other distros too as long as you have dealt with all deps)
- Python 3.5 or later (or something might go wrong, https://github.com/jm33-m0/massExpConsole/issues/7#issuecomment-305962655)
- proxychains4 (in $PATH), used by the exploiter, requires a working socks5 proxy (you can modify its config in mec.py)
- Java is required when using Java deserialization exploits; you might want to install openjdk-8-jre if you haven't installed it yet
- note that you have to install all the deps of your exploits or tools as well

Usage
- just run mec.py; if it complains about missing modules, install them
- if you want to add your own exploit script (or binary file, whatever): cd exploits and create a directory for it; your exploit should take the last argument passed to it as its target (dig into mec.py to know more)
- chmod +x <exploit> to make sure it can be executed by the current user
- use the attack command then m to select your custom exploit
- type help in the console to see all available features
- zoomeye requires a valid user account config file zoomeye.conf

Download MEC
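As a concrete illustration of the custom-exploit convention described in the Usage section above (the directory and script names are hypothetical placeholders):

cd exploits
mkdir my_exploit
cp ~/my_exploit.py my_exploit/ && chmod +x my_exploit/my_exploit.py
# the script must accept the target as its last argument;
# inside mec.py, run `attack`, then press `m` and pick my_exploit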

Link: http://www.kitploit.com/2018/12/mec-v140-mass-exploit-console.html

Hayat – Auditing & Hardening Script For Google Cloud Platform

Hayat is an auditing & hardening script for Google Cloud Platform services such as:
- Identity & Access Management
- Networking
- Virtual Machines
- Storage
- Cloud SQL Instances
- Kubernetes Clusters
for now.

Identity & Access Management
- Ensure that corporate login credentials are used instead of Gmail accounts.
- Ensure that there are only GCP-managed service account keys for each service account.
- Ensure that ServiceAccount has no Admin privileges.
- Ensure that IAM users are not assigned the Service Account User role at project level.

Networking
- Ensure the default network does not exist in a project.
- Ensure legacy networks do not exist for a project.
- Ensure that DNSSEC is enabled for Cloud DNS.
- Ensure that RSASHA1 is not used for the key-signing key in Cloud DNS DNSSEC.
- Ensure that RSASHA1 is not used for the zone-signing key in Cloud DNS DNSSEC.
- Ensure that RDP access is restricted from the Internet.
- Ensure Private Google Access is enabled for all subnetworks in VPC Network.
- Ensure VPC Flow Logs is enabled for every subnet in VPC Network.

Virtual Machines
- Ensure that instances are not configured to use the default service account with full access to all Cloud APIs.
- Ensure "Block Project-wide SSH keys" is enabled for VM instances.
- Ensure oslogin is enabled for a Project.
- Ensure 'Enable connecting to serial ports' is not enabled for VM Instance.
- Ensure that IP forwarding is not enabled on Instances.

Storage
- Ensure that Cloud Storage bucket is not anonymously or publicly accessible.
- Ensure that logging is enabled for Cloud Storage bucket.

Cloud SQL Database Services
- Ensure that Cloud SQL database instance requires all incoming connections to use SSL.
- Ensure that Cloud SQL database instances are not open to the world.
- Ensure that MySQL database instance does not allow anyone to connect with administrative privileges.
- Ensure that MySQL Database Instance does not allow root login from any host.

Kubernetes Engine
- Ensure Stackdriver Logging is set to Enabled on Kubernetes Engine Clusters.
- Ensure Stackdriver Monitoring is set to Enabled on Kubernetes Engine Clusters.
- Ensure Legacy Authorization is set to Disabled on Kubernetes Engine Clusters.
- Ensure Master authorized networks is set to Enabled on Kubernetes Engine Clusters.
- Ensure Kubernetes Clusters are configured with Labels.
- Ensure Kubernetes web UI / Dashboard is disabled.
- Ensure Automatic node repair is enabled for Kubernetes Clusters.
- Ensure Automatic node upgrades are enabled on Kubernetes Engine Cluster nodes.

Requirements
Hayat has been written in bash script using gcloud and it is compatible with Linux and OSX.

Usage
git clone https://github.com/DenizParlak/Hayat.git && cd Hayat && chmod +x hayat.sh && ./hayat.sh

You can use it with specific functions, e.g. if you want to scan just the Kubernetes Clusters:
./hayat.sh --only-kubernetes

Screenshots

Download Hayat
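Since the script drives gcloud, it assumes an authenticated gcloud session with a project selected; this prerequisite is not spelled out in the README, but it typically amounts to something like:

gcloud auth login
gcloud config set project <PROJECT_ID>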

Link: http://feedproxy.google.com/~r/PentestTools/~3/eanL2lSrxVg/hayat-auditing-hardening-script-for.html

MCExtractor – Intel, AMD, VIA & Freescale Microcode Extraction Tool

Intel, AMD, VIA & Freescale Microcode Extraction Tool

MC Extractor News Feed
MC Extractor Discussion Topic
Intel, AMD & VIA CPU Microcode Repositories

A. About MC Extractor
MC Extractor is a tool which parses Intel, AMD, VIA and Freescale processor microcode binaries. It can be used by end-users who are looking for all relevant microcode information such as CPUID, Platform, Version, Date, Release, Size, Checksum etc. It is capable of converting Intel microcode containers (dat, inc, h, txt) to binary images for BIOS integration, detecting new/unknown microcodes, checking microcode health, Updated/Outdated status and more. MC Extractor can also be used as a research analysis tool with multiple structures which allow, among others, full parsing & information display of all microcode Headers, documented or not. Moreover, with the help of its extensive database, MC Extractor is capable of uniquely categorizing all supported microcodes as well as checking for any microcodes which have not been stored at the Microcode Repositories yet.

A1. MC Extractor Features
- Supports all current & legacy Microcodes from 1995 and onward
- Scans for all Intel, AMD, VIA & Freescale microcodes in one run
- Verifies all extracted microcode integrity via Checksums
- Checks if all Intel, AMD & VIA microcodes are Latest or Outdated
- Converts Intel containers (dat, inc, txt, h) to binary images
- Searches on demand for all microcodes based on CPUID
- Shows microcode Header structures and details on demand
- Ignores most false positives based on sanity checks
- Supports known special, fixed or modded microcodes
- Ability to quickly add new microcode entries to the database
- Ability to detect Intel Production/Pre-Production Release tag
- Ability to analyze multiple files by drag & drop or by input path
- Ability to ignore extracted duplicates based on name and contents
- Reports all microcodes which are not found at the Microcode Repositories
- Features command line parameters to enhance functionality & assist research
- Features user friendly messages & proper handling of unexpected code errors
- Shows results in nice tables with colored text to signify emphasis
- Open Source project licensed under GNU GPL v3, comment assisted code

A2. Microcode Repository Database
MC Extractor allows end-users and/or researchers to quickly extract, view, convert & report new microcode versions without the use of special tools or Hex Editors. To do that effectively, a database had to be built. The Intel, AMD & VIA CPU Microcode Repositories is a collection of every Intel, AMD & VIA CPU Microcode we have found. Its existence is very important for MC Extractor as it allows us to continue doing research, find new types of microcode, compare releases for similarities, check for updated binaries etc. Bundled with MC Extractor is a file called MCE.db which is required for the program to run. It includes entries for all Microcode binaries that are available to us. This accommodates primarily two actions: a) check whether the imported microcode is up to date and b) help find new Microcode releases sooner by reporting them at the Intel, AMD & VIA CPU Microcode Repositories Discussion thread.

A3. Sources and Inspiration
MC Extractor was initially based on a fraction of Lordkag's UEFIStrip tool so, first and foremost, I thank him for all his work which inspired this project.
Among others, great places to learn about microcodes are Intel's own download site and official documentation, Intel Microcode Patch Authentication, Coreboot (a, b, c), Microparse by Dominic Chen, Ben Hawkes's Notes and Research, Richard A Burton's Microdecode, AIDA64 CPUID dumps, Sandpile CPUID, Free Electrons (a, b), Freescale and many more which I may have forgotten but would have been here otherwise.

B. How to use MC Extractor
There are two ways to use MC Extractor: the MCE executable and the Command Prompt. The MCE executable allows you to drag & drop one or more firmware files and view them one by one or recursively scan entire directories. To manually call MC Extractor, a Command Prompt can be used with -skip as a parameter.

B1. MC Extractor Executable
To use MC Extractor, select one or multiple files and Drag & Drop them to its executable. You can also input certain optional parameters either by running MCE directly or by first dropping one or more files to it. Keep in mind that, due to operating system limitations, there is a limit on how many files can be dropped at once. If the latter is a problem, you can always use the -mass parameter to recursively scan entire directories as explained below.

B2. MC Extractor Parameters
There are various parameters which enhance or modify the default behavior of MC Extractor:
-? : Displays help & usage screen
-skip : Skips welcome & options screen
-exit : Skips "Press enter to exit" prompt
-redir : Enables console redirection support
-mass : Scans all files of a given directory
-info : Displays microcode header(s)
-add : Adds new input microcode to DB
-dbname : Renames input file based on DB name
-cont : Extracts Intel containers (dat,inc,h,txt)
-search : Searches for microcodes based on CPUID
-last : Shows Latest status based on user input
-repo : Builds microcode repositories from input

B3. MC Extractor Error Control
During operation, MC Extractor may encounter issues that can trigger Notes, Warnings and/or Errors. Notes (yellow/green color) provide useful information about a characteristic of this particular firmware. Warnings (purple color) notify the user of possible problems that can cause system instability. Errors (red color) are shown when something unexpected or problematic is encountered.

C. Download MC Extractor
MC Extractor consists of two files, the executable (MCE.exe or MCE) and the database (MCE.db). An already built/frozen/compiled binary is provided by me for Windows only (icon designed by Alfredo Hernandez). Thus, you don't need to manually build/freeze/compile MC Extractor under Windows. Instead, download the latest version from the Releases tab; the title should be "MC Extractor v1.X.X". You may need to scroll down a bit if there are DB releases at the top. The latter can be used to update the outdated DB which was bundled with the latest executable release; the title should be "DB rXX". To extract the already built/frozen/compiled archive, you need to use programs which support RAR5 compression.

C1. Compatibility
MC Extractor should work on all Windows, Linux or macOS operating systems which have Python 3.6 support. Windows users who plan to use the already built/frozen/compiled binaries must make sure that they have the latest Windows Updates installed, which include all required "Universal C Runtime (CRT)" libraries.

C2. Code Prerequisites
To run MC Extractor's python script, you need to have the following 3rd party Python modules installed:

Colorama
pip3 install colorama

PTable
pip3 install https://github.com/platomav/PTable/archive/boxchar.zip
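With those modules installed, the script can also be run directly from a terminal; a typical invocation might look like this (the directory path is a hypothetical placeholder, using the parameters listed in section B2):

python MCE.py -skip -mass /path/to/microcode/dumps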
B3. MC Extractor Error Control

During operation, MC Extractor may encounter issues that can trigger Notes, Warnings and/or Errors. Notes (yellow/green color) provide useful information about a characteristic of the particular firmware. Warnings (purple color) notify the user of possible problems that can cause system instability. Errors (red color) are shown when something unexpected or problematic is encountered.

C. Download MC Extractor

MC Extractor consists of two files: the executable (MCE.exe or MCE) and the database (MCE.db). An already built/frozen/compiled binary is provided by me for Windows only (icon designed by Alfredo Hernandez), so you don’t need to build/freeze/compile MC Extractor yourself under Windows. Instead, download the latest version from the Releases tab; the title should be “MC Extractor v1.X.X”. You may need to scroll down a bit if there are DB releases at the top. The latter can be used to update an outdated DB that was bundled with the latest executable release; the title should be “DB rXX”. To extract the already built/frozen/compiled archive, you need to use a program which supports RAR5 compression.

C1. Compatibility

MC Extractor should work on all Windows, Linux or macOS operating systems which have Python 3.6 support. Windows users who plan to use the already built/frozen/compiled binaries must make sure that they have the latest Windows updates installed, which include all required “Universal C Runtime (CRT)” libraries.

C2. Code Prerequisites

To run MC Extractor’s Python script, you need to have the following third-party Python modules installed:

Colorama
pip3 install colorama

PTable
pip3 install https://github.com/platomav/PTable/archive/boxchar.zip

C3. Build/Freeze/Compile with PyInstaller

PyInstaller can build/freeze/compile MC Extractor on all three supported platforms; it is simple to run and gets updated often.

Make sure Python 3.6.0 or newer is installed:
python --version

Use pip to install PyInstaller:
pip3 install pyinstaller

Use pip to install colorama:
pip3 install colorama

Use pip to install PTable:
pip3 install https://github.com/platomav/PTable/archive/boxchar.zip

Build/Freeze/Compile MC Extractor:
pyinstaller --noupx --onefile MCE.py

In the dist folder you should find the final MCE executable.

D. Pictures

Note: Some pictures are outdated and depict older MC Extractor versions.

Download MCExtractor

Link: http://feedproxy.google.com/~r/PentestTools/~3/UdW1gu5O6Ds/mcextractor-intel-amd-via-freescale.html

Trape v2.0 – People Tracker On The Internet: OSINT Analysis And Research Tool

Trape is an OSINT analysis and research tool that allows people to track targets and execute intelligent social engineering attacks in real time. It was created to teach the world how large Internet companies can obtain confidential information, such as the session status of their websites or services, and control their users through the browser without their knowledge. It has since evolved with the aim of helping government organizations, companies and researchers track cybercriminals. At the beginning of 2018 it was presented at BlackHat Arsenal in Singapore (https://www.blackhat.com/asia-18/arsenal.html#jose-pino) and at multiple security events worldwide.

Some benefits

LOCATOR OPTIMIZATION: Traces the path between you and the target you are tracking. Each time the target moves, the path is updated. The target’s location is obtained silently through a bypass made in the browsers, so the location permission request is never shown on the victim’s side while the locator maintains a precision of 99%.
APPROACH: When you are close to the target, Trape will tell you.
REST API: Generates an API (random or custom) through which you can remotely control and monitor other websites on the Internet, getting the traffic of all visitors.
PROCESS HOOKS: Manages social engineering attacks or processes in the target’s browser.
— SEVERAL: You can issue a phishing attack of any domain or service in real time, as well as send malicious files to compromise a target’s device.
— INJECT JS: Keeps JavaScript code running freely in real time, so you can manage the execution of a keylogger or your own custom JS functions, which will be reflected in the target’s browser.
— SPEECH: Maintains an audio-creation process which is played in the target’s browser; through this you can deliver personalized messages in different voices, in Spanish and English.
PUBLIC NETWORK TUNNEL: Trape has its own API linked to ngrok.com to allow automatic management of public network tunnels, so you can publish the content of a Trape server running locally to the Internet, to manage hooks or public attacks.
CLICK ATTACK TO GET CREDENTIALS: Automatically obtains the target’s credentials by recognizing their active sessions on a social network or Internet service.
NETWORK: You can get information about the target’s network.
— SPEED: Views the target’s network speed (ping, download, upload, connection type).
— HOSTS OR DEVICES: Automatically scans all the devices connected to the target’s network.
PROFILE: Brief summary of the target’s behavior and important additional information about their device.
— GPU
— ENERGY
SESSION RECOGNITION: Session recognition is one of Trape’s most interesting features, since as a researcher you can know remotely which services the target is logged into.
USABILITY: You can delete logs and view alerts for each process or action you run against each target.

How to use it

First download the tool:

git clone https://github.com/jofpin/trape.git
cd trape
python trape.py -h

If it does not work, try to install all the libraries listed in the file requirements.txt:

pip install -r requirements.txt

Example of execution:

python trape.py --url http://example.com --port 8080

HELP AND OPTIONS

user:~$ python trape.py --help
usage: python trape.py -u <> -p <> [-h] [-v] [-u URL] [-p PORT] [-ak ACCESSKEY] [-l LOCAL] [--update] [-n] [-ic INJC]

optional arguments:
  -h, --help            show this help message and exit
  -v, --version         show program’s version number and exit
  -u URL, --url URL     Put the web page url to clone
  -p PORT, --port PORT  Insert your port
  -ak ACCESSKEY, --accesskey ACCESSKEY
                        Insert your custom key access
  -l LOCAL, --local LOCAL
                        Insert your home file
  -n, --ngrok           Insert your ngrok Authtoken
  -ic INJC, --injectcode INJC
                        Insert your custom REST API path
  -ud UPDATE, --update UPDATE
                        Update trape to the latest version

--url         The URL you want to clone live, which works as a decoy.
--port        The port where the Trape server will run.
--accesskey   A custom key for the Trape panel; if you do not provide one, an automatic key is generated.
--injectcode  Trape contains a REST API that can be embedded anywhere; with this option you can customize the name of the file to include. If omitted, a random name alluding to a token is generated.
--local       Call a local HTML file instead of --url, to run a local lure in Trape.
--ngrok       Enter an ngrok token to use at runtime; this replaces the token saved in the configuration.
--version     Shows the Trape version number.
--update      Upgrades Trape to the latest version.
--help        Shows all of the above options from the executable.
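As an illustrative, slightly fuller invocation combining the options above (the domain, port, access key and inject path are placeholders, not values from the project; the flags are the ones documented in the help output):

python trape.py --url https://example.com --port 8080 --accesskey mypanelkey --injectcode lure_token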
Disclaimer

This tool has been published for educational purposes, in order to teach people how bad actors could track them, monitor them or obtain their credentials; we are not responsible for the use or the scope that people may give this project. We are totally convinced that if we teach how vulnerable things really are, we can make the Internet a safer place.

Developer

In this development and others, the participants will be mentioned with name, Twitter handle and role.

CREATOR
— Jose Pino – @jofpin – (Security Researcher)

Download Trape v2.0

Link: http://www.kitploit.com/2018/11/trape-v20-people-tracker-on-internet.html

Sn1per v6.0 – Automated Pentest Framework For Offensive Security Experts

Sn1per Community Edition is an automated scanner that can be used during a penetration test to enumerate and scan for vulnerabilities. Sn1per Professional is Xero Security’s premium reporting addon for professional penetration testers, bug bounty researchers and corporate security teams to manage large environments and pentest scopes.

SN1PER PROFESSIONAL FEATURES:
- Professional reporting interface
- Slideshow for all gathered screenshots
- Searchable and sortable DNS, IP and open port database
- Categorized host reports
- Quick links to online recon tools and Google hacking queries
- Personalized notes field for each host

DEMO VIDEO:

SN1PER COMMUNITY FEATURES:
- Automatically collects basic recon (ie. whois, ping, DNS, etc.)
- Automatically launches Google hacking queries against a target domain
- Automatically enumerates open ports via NMap port scanning
- Automatically brute forces sub-domains, gathers DNS info and checks for zone transfers
- Automatically checks for sub-domain hijacking
- Automatically runs targeted NMap scripts against open ports
- Automatically runs targeted Metasploit scan and exploit modules
- Automatically scans all web applications for common vulnerabilities
- Automatically brute forces ALL open services
- Automatically tests for anonymous FTP access
- Automatically runs WPScan, Arachni and Nikto for all web services
- Automatically enumerates NFS shares
- Automatically tests for anonymous LDAP access
- Automatically enumerates SSL/TLS ciphers, protocols and vulnerabilities
- Automatically enumerates SNMP community strings, services and users
- Automatically lists SMB users and shares, checks for NULL sessions and exploits MS08-067
- Automatically exploits vulnerable JBoss, Java RMI and Tomcat servers
- Automatically tests for open X11 servers
- Auto-pwn added for Metasploitable, ShellShock, MS08-067, Default Tomcat Creds
- Performs high level enumeration of multiple hosts and subnets
- Automatically integrates with Metasploit Pro, MSFConsole and Zenmap for reporting
- Automatically gathers screenshots of all web sites
- Creates individual workspaces to store all scan output

AUTO-PWN:
- Drupal Drupalgeddon2 RCE CVE-2018-7600
- GPON Router RCE CVE-2018-10561
- Apache Struts 2 RCE CVE-2017-5638
- Apache Struts 2 RCE CVE-2017-9805
- Apache Jakarta RCE CVE-2017-5638
- Shellshock GNU Bash RCE CVE-2014-6271
- HeartBleed OpenSSL Detection CVE-2014-0160
- Default Apache Tomcat Creds CVE-2009-3843
- MS Windows SMB RCE MS08-067
- Webmin File Disclosure CVE-2006-3392
- Anonymous FTP Access
- PHPMyAdmin Backdoor RCE
- PHPMyAdmin Auth Bypass
- JBoss Java De-Serialization RCEs

KALI LINUX INSTALL:
./install.sh

DOCKER INSTALL:
Credits: @menzow
Docker Install: https://github.com/menzow/sn1per-docker
Docker Build: https://hub.docker.com/r/menzo/sn1per-docker/builds/bqez3h7hwfun4odgd2axvn4/
Example usage:
$ docker pull menzo/sn1per-docker
$ docker run --rm -ti menzo/sn1per-docker sniper menzo.io

USAGE:
[*] NORMAL MODE
sniper -t|--target <TARGET>
[*] NORMAL MODE + OSINT + RECON
sniper -t|--target <TARGET> -o|--osint -re|--recon
[*] STEALTH MODE + OSINT + RECON
sniper -t|--target <TARGET> -m|--mode stealth -o|--osint -re|--recon
[*] DISCOVER MODE
sniper -t|--target <CIDR> -m|--mode discover -w|--workspace <WORKSPACE_ALIAS>
[*] SCAN ONLY SPECIFIC PORT
sniper -t|--target <TARGET> -m port -p|--port <portnum>
[*] FULLPORTONLY SCAN MODE
sniper -t|--target <TARGET> -fp|--fullportonly
[*] PORT SCAN MODE
sniper -t|--target <TARGET> -m|--mode port -p|--port <PORT_NUM>
[*] WEB MODE – PORT 80 + 443 ONLY!
sniper -t|--target <TARGET> -m|--mode web
[*] HTTP WEB PORT MODE
sniper -t|--target <TARGET> -m|--mode webporthttp -p|--port <port>
[*] HTTPS WEB PORT MODE
sniper -t|--target <TARGET> -m|--mode webporthttps -p|--port <port>
[*] ENABLE BRUTEFORCE
sniper -t|--target <TARGET> -b|--bruteforce
[*] AIRSTRIKE MODE
sniper -f|--file /full/path/to/targets.txt -m|--mode airstrike
[*] NUKE MODE WITH TARGET LIST, BRUTEFORCE ENABLED, FULLPORTSCAN ENABLED, OSINT ENABLED, RECON ENABLED, WORKSPACE & LOOT ENABLED
sniper -f|--file /full/path/to/targets.txt -m|--mode nuke -w|--workspace <WORKSPACE_ALIAS>
[*] ENABLE LOOT IMPORTING INTO METASPLOIT
sniper -t|--target <TARGET>
[*] LOOT REIMPORT FUNCTION
sniper -w <WORKSPACE_ALIAS> --reimport
[*] UPDATE SNIPER
sniper -u|--update
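For instance, a stealth scan with OSINT and recon against a single host, followed by a discover-mode sweep of an internal range into a named workspace, could look like the following (the hostname, CIDR and workspace name are illustrative placeholders; the flags are the short forms documented in the usage listing above):

sniper -t target.example.com -m stealth -o -re
sniper -t 192.168.1.0/24 -m discover -w internal_lab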
MODES:
NORMAL: Performs a basic scan of targets and open ports using both active and passive checks for optimal performance.
STEALTH: Quickly enumerates single targets using mostly non-intrusive scans to avoid WAF/IPS blocking.
AIRSTRIKE: Quickly enumerates open ports/services on multiple hosts and performs basic fingerprinting. To use, specify the full location of the file which contains all hosts or IPs that need to be scanned and run ./sn1per /full/path/to/targets.txt airstrike to begin scanning.
NUKE: Launches a full audit of multiple hosts specified in a text file of your choice. Usage example: ./sniper /pentest/loot/targets.txt nuke.
DISCOVER: Parses all hosts on a subnet/CIDR (ie. 192.168.0.0/16) and initiates a sniper scan against each host. Useful for internal network scans.
PORT: Scans a specific port for vulnerabilities. Reporting is not currently available in this mode.
FULLPORTONLY: Performs a full detailed port scan and saves results to XML.
WEB: Adds full automatic web application scans to the results (port 80/tcp & 443/tcp only). Ideal for web applications but may increase scan time significantly.
WEBPORTHTTP: Launches a full HTTP web application scan against a specific host and port.
WEBPORTHTTPS: Launches a full HTTPS web application scan against a specific host and port.
UPDATE: Checks for updates and upgrades all components used by sniper.
REIMPORT: Reimports all workspace files into Metasploit and reproduces all reports.
RELOAD: Reloads the master workspace report.

SAMPLE REPORT:
https://gist.github.com/1N3/8214ec2da2c91691bcbc

Download Sn1per v6.0

Link: http://feedproxy.google.com/~r/PentestTools/~3/RLWB_3_Wk9M/sn1per-v60-automated-pentest-framework.html

CMS Scanner – Scan WordPress, Drupal, Joomla, vBulletin Websites For Security Issues

Scan WordPress, Drupal, Joomla and vBulletin websites for security issues. CMSScan provides a centralized security dashboard for CMS security scans. It is powered by wpscan, droopescan, vbscan and joomscan. It supports both on-demand and scheduled scans, and can send email reports.

Install

# Requires ruby, ruby-dev, gem, python3 and git
git clone https://github.com/ajinabraham/CMSScan.git
cd CMSScan
./setup.sh

Run

./run.sh

Periodic Scans

You can perform periodic CMS scans with CMSScan. You must run the CMSScan server separately and configure the following settings before running the scheduler.py script:

# SMTP SETTINGS
SMTP_SERVER = ''
FROM_EMAIL = ''
TO_EMAIL = ''

# SERVER SETTINGS
SERVER = ''

# SCAN SITES
WORDPRESS_SITES = []
DRUPAL_SITES = []
JOOMLA_SITES = []
VBULLETIN_SITES = []

Add a cronjob:

crontab -e
@weekly /usr/bin/python3 scheduler.py
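As a rough sketch of what a filled-in configuration might look like, assuming SERVER points at the running CMSScan instance (all addresses and site URLs below are placeholders, not values from the project):

# SMTP SETTINGS
SMTP_SERVER = 'smtp.example.com'
FROM_EMAIL = 'scanner@example.com'
TO_EMAIL = 'secteam@example.com'

# SERVER SETTINGS
SERVER = 'http://127.0.0.1:7070'

# SCAN SITES
WORDPRESS_SITES = ['https://blog.example.com']
DRUPAL_SITES = []
JOOMLA_SITES = []
VBULLETIN_SITES = ['https://forum.example.com']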

Docker

Local:
docker build -t cmsscan .
docker run -it -p 7070:7070 cmsscan

Prebuilt Image:
docker pull opensecurity/cmsscan
docker run -it -p 7070:7070 opensecurity/cmsscan

Screenshots

Download CMSScan

Link: http://feedproxy.google.com/~r/PentestTools/~3/w0AREgkhNJQ/cms-scanner-scan-wordpress-drupal.html