Celerystalk – An Asynchronous Enumeration and Vulnerability Scanner

celerystalk helps you automate your network scanning/enumeration process with asynchronous jobs (aka tasks) while retaining full control of which tools you want to run.

- Configurable – Some common tools are in the default config, but you can add any tool you want
- Service Aware – Uses nmap/nessus service names rather than port numbers to decide which tools to run
- Scalable – Designed for scanning multiple hosts, but works well for scanning one host at a time
- VirtualHosts – Supports subdomain recon and virtualhost scanning
- Job Control – Supports canceling, pausing, and resuming of tasks, inspired by Burp scanner
- Screenshots – Automatically takes screenshots of every URL identified via brute force (gobuster) and spidering (Photon)

Install/Setup

Supported Operating Systems: Kali
Supported Python Version: 2.x
You must install and run celerystalk as root.

    # git clone https://github.com/sethsec/celerystalk.git
    # cd celerystalk/setup
    # ./install.sh
    # cd ..
    # ./celerystalk -h

Using celerystalk – The basics

[CTF/HackTheBox mode] – How to scan a host by IP:

    # nmap -Pn -p- -sV -oX tenten.xml                # Run nmap
    # ./celerystalk workspace create -o /htb         # Create default workspace and set output dir
    # ./celerystalk import -f tenten.xml             # Import scan
    # ./celerystalk db services                      # If you want to see what services were loaded
    # ./celerystalk scan                             # Run all enabled commands
    # ./celerystalk query watch (then Ctrl+c)        # Watch tasks move from pending > running > completed
    # ./celerystalk report                           # Generate report
    # firefox /htb/celerystalkReports/Workspace-Report[Default].html &    # View report

[Vulnerability Assessment Mode] – How to scan a list of in-scope hosts/networks and any subdomains that resolve to any of the in-scope IPs:

    # nmap -iL client-inscope-list.txt -Pn -p- -sV -oX client.xml    # Run nmap
    # ./celerystalk workspace create -o /assessments/client          # Create default workspace and set output dir
    # ./celerystalk import -f client.xml -S scope.txt                # Import scan and scope files
    # ./celerystalk subdomains -d client.com,client.net              # Find subdomains and determine if in scope
    # ./celerystalk scan                                             # Run all enabled commands
    # ./celerystalk query watch (then Ctrl+c)                        # Wait for scans to finish
    # ./celerystalk report                                           # Generate report
    # firefox /celerystalkReports/Workspace-Report[Default].html &   # View report

[URL Mode] – How to scan a URL (use this mode to scan sub-directories found during the first wave of scans):

    # ./celerystalk workspace create -o /assessments/client          # Create default workspace and set output dir
    # ./celerystalk scan -u <url>                                    # Run all enabled commands
    # ./celerystalk query watch (then Ctrl+c)                        # Wait for scans to finish
    # ./celerystalk report                                           # Generate report
    # firefox <path>/celerystalkReports/Workspace-Report[Default].html &    # View report

Using celerystalk – Some more detail

Configure which tools you'd like celerystalk to execute: The install script drops a config.ini file in the celerystalk folder. config.ini is broken up into three sections:

Service Mapping – The first section normalizes Nmap and Nessus service names for celerystalk (this idea was created by @codingo_ in Reconnoitre, AFAIK).

    [nmap-service-names]
    http = http,http-alt,http-proxy,www,http?
    https = ssl/http,https,ssl/http-alt,ssl/http?
    ftp = ftp,ftp?
    mysql = mysql
    dns = dns,domain,domain

Domain Recon Tools – The second section defines the tools you'd like to use for subdomain discovery (an optional feature):

    [domain-recon]
    amass : /opt/amass/amass -d [DOMAIN]
    sublist3r : python /opt/Sublist3r/sublist3r.py -d [DOMAIN]

Service Configuration – The rest of the config.ini sections define which commands you want celerystalk to run for each identified service (i.e., http, https, ssh).
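Each service section maps a service type to command templates containing [TARGET], [PORT], and [OUTPUT] placeholders that celerystalk fills in per task. A hypothetical Python sketch of that substitution (not celerystalk's actual code; the function name and paths are illustrative):

```python
# Illustrative sketch of celerystalk-style command template expansion.
# The placeholder names come from config.ini; everything else is made up.
def expand_template(template, target, port, output_base):
    """Replace [TARGET], [PORT], and [OUTPUT] in a command template."""
    return (template.replace("[TARGET]", target)
                    .replace("[PORT]", str(port))
                    .replace("[OUTPUT]", output_base))

cmd = expand_template(
    "nikto -h http://[TARGET] -p [PORT] &> [OUTPUT].txt",
    "10.10.10.10", 80, "/htb/10.10.10.10/nikto")
print(cmd)   # → nikto -h http://10.10.10.10 -p 80 &> /htb/10.10.10.10/nikto.txt
```

Each expanded command is what celery ultimately executes as one asynchronous task.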
Disable any command by commenting it out with a ; or a #. Add your own commands using the [TARGET], [PORT], and [OUTPUT] placeholders. Here is an example:

    [http]
    whatweb : whatweb http://[TARGET]:[PORT] -a3 --colour=never > [OUTPUT].txt
    cewl : cewl http://[TARGET]:[PORT]/ -m 6 -w [OUTPUT].txt
    curl_robots : curl http://[TARGET]:[PORT]/robots.txt --user-agent 'Googlebot/2.1 (+http://www.google.com/bot.html)' --connect-timeout 30 --max-time 180 > [OUTPUT].txt
    nmap_http_vuln : nmap -sC -sV -Pn -v -p [PORT] --script=http-vuln* [TARGET] -d -oN [OUTPUT].txt -oX [OUTPUT].xml --host-timeout 120m --script-timeout 20m
    nikto : nikto -h http://[TARGET] -p [PORT] &> [OUTPUT].txt
    gobuster-common : gobuster -u http://[TARGET]:[PORT]/ -k -w /usr/share/seclists/Discovery/Web-Content/common.txt -s '200,204,301,302,307,403,500' -e -n -q > [OUTPUT].txt
    photon : python /opt/Photon/photon.py -u http://[TARGET]:[PORT] -o [OUTPUT]
    ;gobuster_2.3-medium : gobuster -u http://[TARGET]:[PORT]/ -k -w /usr/share/wordlists/dirbuster/directory-list-lowercase-2.3-medium.txt -s '200,204,301,307,403,500' -e -n -q > [OUTPUT].txt

Run Nmap or Nessus:

Nmap: Run nmap against your target(s). Required: enable version detection (-sV) and output to XML (-oX filename.xml). All other nmap options are up to you.
Here are some examples:

    nmap target(s) -Pn -p- -sV -oX filename.xml
    nmap -iL target_list.txt -Pn -sV -oX filename.xml

Nessus: Run Nessus against your target(s) and export the results as a .nessus file.

Create workspace:

    Option       Description
    no options   Prints current workspace
    create       Creates new workspace
    -w           Define new workspace name
    -o           Define output directory assigned to workspace

    Create default workspace:    ./celerystalk workspace create -o /assessments/client
    Create named workspace:      ./celerystalk workspace create -o /assessments/client -w client
    Switch to another workspace: ./celerystalk workspace client

Import Data: Import data into celerystalk.

    Option             Description
    -f scan.xml        Nmap/Nessus XML. Adds all IP addresses from this file to the hosts table and marks them all in scope to be scanned. Adds all ports and service types to the services table.
    -S scope.txt       Scope file. Adds the in-scope IPs/ranges listed in the file to the DB and marks them in scope to be scanned.
    -D subdomains.txt  (Sub)domains file. celerystalk determines whether each subdomain is in scope by resolving its IP and looking for that IP in the DB. If there is a match, the domain is marked as in scope and will be scanned.

    Import Nmap XML file:      ./celerystalk import -f /assessments/nmap.xml
    Import Nessus file:        ./celerystalk import -f /assessments/scan.nessus
    Import list of Domains:    ./celerystalk import -D <file>
    Import list of IPs/Ranges: ./celerystalk import -S <file>
    Specify workspace:         ./celerystalk import -f <file>
    Import multiple files:     ./celerystalk import -f nmap.xml -S scope.txt -D domains.txt

Find Subdomains (Optional): celerystalk will perform subdomain recon using the tools specified in config.ini.

    Option                  Description
    -d domain1,domain2,etc  Run Amass, Sublist3r, etc. and store the discovered domains in the DB. After running your subdomain recon tools, celerystalk determines whether each subdomain is in scope by resolving its IP and looking for that IP in the DB. If there is a match, the domain is marked as in scope and will be scanned.
Find subdomains: celerystalk subdomains -d domain1.com,domain2.com

Launch Scan: I recommend using the import command first and running scan with no options; however, you do have the option to do it all at once (import and scan) by using the flags below. celerystalk will submit tasks to celery, which asynchronously executes them and logs output to your output directory.

    Option            Description
    no options        Scan all in-scope hosts. Reads the DB and scans every in-scope IP and subdomain. Launches all enabled tools against IPs, but only http/https-specific tools against virtualhosts.
    -t ip,vhost,cidr  Scan specific target(s) from the DB or scan file. Scans a subset of the in-scope IPs and/or subdomains.
    -s                Simulation. Sends all of the tasks to celery, but each command is executed with a # before it, rendering it inert.

Use these only if you want to skip the import phase and import/scan all at once:

    Option                  Description
    -f scan.xml             Import and process Nmap/Nessus XML before the scan. Adds all IP addresses from this file to the hosts table and marks them all in scope to be scanned. Adds all ports and service types to the services table.
    -S scope.txt            Import and process a scope file before the scan. Adds the in-scope IPs/ranges listed in the file to the DB and marks them in scope to be scanned.
    -D subdomains.txt       Import and process a (sub)domains file before the scan. celerystalk determines whether each subdomain is in scope by resolving its IP and looking for that IP in the DB. If there is a match, the domain is marked as in scope and will be scanned.
    -d domain1,domain2,etc  Find subdomains and scan in-scope hosts. After running your subdomain recon tools, celerystalk determines whether each subdomain is in scope by resolving its IP and looking for that IP in the DB. If there is a match, the domain is marked as in scope and will be scanned.
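The in-scope test described above (resolve each discovered subdomain, then look for its IP among the in-scope addresses in the DB) can be sketched roughly as follows. This is an illustration of the idea, not celerystalk's actual code; the function name and the in_scope_ips set are assumptions:

```python
# Illustrative sketch: mark a subdomain in scope only if it resolves
# to an IP that is already in the scoped-hosts set (the "DB" here).
import socket

def classify_subdomains(subdomains, in_scope_ips):
    """Return {subdomain: True/False} based on whether it resolves to a scoped IP."""
    results = {}
    for name in subdomains:
        try:
            ip = socket.gethostbyname(name)
        except socket.gaierror:
            results[name] = False        # does not resolve -> out of scope
            continue
        results[name] = ip in in_scope_ips
    return results
```

A subdomain that resolves to an out-of-scope IP is stored but not scanned, which matches the behavior described in the option tables above.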
Scan imported hosts/subdomains:

    Scan all in-scope hosts: ./celerystalk scan
    Scan subset of DB hosts: ./celerystalk scan -t <ip,cidr,vhost>, e.g. ./celerystalk scan -t sub.domain.com
    Simulation mode:         ./celerystalk scan -s

Import and Scan:

    Start from Nmap XML file: ./celerystalk scan -f /pentest/nmap.xml -o /pentest
    Start from Nessus file:   ./celerystalk scan -f /pentest/scan.nessus -o /pentest
    Scan all in-scope vhosts: ./celerystalk scan -f <file> -o /pentest -d domain1.com,domain2.com
    Scan subset hosts in XML: ./celerystalk scan -f <file> -o /pentest -t <targets>
    Simulation mode:          ./celerystalk scan -f <file> -o /pentest -s

Rescan: Use this command to rescan an already scanned host.

    Option            Description
    no option         For each in-scope host in the DB, celerystalk will ask if you want to rescan it
    -t ip,vhost,cidr  Rescan a subset of the in-scope IPs and/or subdomains

    Rescan all hosts:  ./celerystalk rescan
    Rescan some hosts: ./celerystalk rescan -t <ip,cidr,vhost>, e.g. ./celerystalk rescan -t sub.domain.com
    Simulation mode:   ./celerystalk rescan -s

Query Status: Asynchronously check the status of the task queue as frequently as you like. Watch mode actually executes the Linux watch command so you don't fill up your entire terminal buffer.

    Option       Description
    no options   Shows all tasks in the default workspace
    watch        Sends the command to the Unix watch command, which gives you an updated status every 2 seconds
    brief        Limit of 5 results per status (pending/running/completed/cancelled/paused)
    summary      Shows only a banner with numbers and not the tasks themselves

    Query tasks: ./celerystalk query
                 ./celerystalk query watch
                 ./celerystalk query brief
                 ./celerystalk query summary
                 ./celerystalk query summary watch

Cancel/Pause/Resume Tasks: Cancel/Pause/Resume any task(s) that are currently running or in the queue.
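These verbs map directly onto Unix job-control signals, as the option table below explains: cancel sends kill -TERM, pause kill -STOP, and resume kill -CONT to the task's process. A minimal, self-contained demonstration of that mechanism (illustrative only; Unix/Linux-specific, using sleep as a stand-in for a scan task):

```python
# Demonstrate the signal mechanics behind pause/resume/cancel:
# SIGSTOP suspends a process, SIGCONT resumes it, SIGTERM cancels it.
import os
import signal
import subprocess
import time

proc = subprocess.Popen(["sleep", "30"])     # stand-in for a long-running scan task
os.kill(proc.pid, signal.SIGSTOP)            # pause: process is suspended, not killed
time.sleep(0.2)
os.kill(proc.pid, signal.SIGCONT)            # resume: picks up right where it left off
os.kill(proc.pid, signal.SIGTERM)            # cancel: terminate the running task
proc.wait()
print(proc.returncode)                       # negative signal number on Unix, i.e. -15
```

Queued (not yet running) tasks have no process to signal, which is why celerystalk cancels those via celery's revoke instead.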
    Option   Description
    cancel   Canceling a running task sends a kill -TERM. Canceling a queued task makes celery ignore it (uses celery's revoke). Canceling all tasks kills running tasks and revokes all queued tasks.
    pause    Pausing a single task uses kill -STOP to suspend the process. Pausing all tasks attempts to kill -STOP all running tasks, but it is a little wonky and you might need to run it a few times. It is possible a job completed before it was able to be paused, which means you will have a worker that is still accepting new jobs.
    resume   Resuming tasks sends a kill -CONT, which allows the process to start up again where it left off.

    Cancel/Pause/Resume tasks: ./celerystalk <verb> 5,6,10-20    # Cancel/Pause/Resume tasks 5, 6, and 10-20 from the current workspace
                               ./celerystalk <verb> all          # Cancel/Pause/Resume all tasks from the current workspace

Run Report: Run a report which combines all of the tool output into an HTML file and a txt file. Run this as often as you like; each time you run the report it overwrites the previous report.
    Create report: ./celerystalk report    # Create a report for all scanned hosts in the current workspace

Access the DB: List the workspaces, hosts, services, or paths stored in the celerystalk database.

    Option        Description
    workspaces    Show all known workspaces and the output directory associated with each workspace
    services      Show all known open ports and service types by IP
    hosts         Show all hosts (IP addresses and subdomains/vhosts), whether they are in scope, and whether they have been submitted for scanning
    paths         Show all paths that have been identified, by vhost
    -w workspace  Specify a non-default workspace

    Show workspaces: ./celerystalk db workspaces
    Show services:   ./celerystalk db services
    Show hosts:      ./celerystalk db hosts
    Show paths:      ./celerystalk db paths

Export DB: Export each table of the DB to a CSV file.

    Option        Description
    no options    Export the services, hosts, and paths tables from the default database
    -w workspace  Specify a non-default workspace

    Export current DB: ./celerystalk db export
    Export another DB: ./celerystalk db export -w test

Usage:
    celerystalk workspace create -o <output_dir> [-w workspace_name]
    celerystalk workspace [<workspace_name>]
    celerystalk import [-f <nmap_file>] [-S scope_file] [-D subdomains_file] [-u <url>]
    celerystalk subdomains -d <domains> [-s]
    celerystalk scan [-f <nmap_file>] [-t <targets>] [-d <domains>] [-S scope_file] [-D subdomains_file] [-s]
    celerystalk scan -u <url> [-s]
    celerystalk rescan [-t <targets>] [-s]
    celerystalk query ([full] | [summary] | [brief]) [watch]
    celerystalk query [watch] ([full] | [summary] | [brief])
    celerystalk report
    celerystalk cancel ([all]|[<task_ids>])
    celerystalk pause ([all]|[<task_ids>])
    celerystalk resume ([all]|[<task_ids>])
    celerystalk db ([workspaces] | [services] | [hosts] | [vhosts] | [paths])
    celerystalk db export
    celerystalk shutdown
    celerystalk interactive
    celerystalk (help | -h | --help)

Options:
    -h --help             Show this screen
    -v --version          Show version
    -f <nmap_file>        Nmap XML import file
    -o <output_dir>       Output directory
    -S <scope_file>       Scope import file
    -D <subdomains_file>  Subdomains import file
    -t <targets>          Target(s): IP, IP range, CIDR
    -u <url>              URL to parse and scan with all configured tools
    -w <workspace>        Workspace
    -d --domains          Domains to scan for vhosts
    -s --simulation       Simulation mode. Submits tasks but comments out all commands

Examples:

Workspace
    Create default workspace:    celerystalk workspace create -o /assessments/client
    Create named workspace:      celerystalk workspace create -o /assessments/client -w client
    Switch to another workspace: celerystalk workspace client2

Import
    Import Nmap XML file:        celerystalk import -f /assessments/nmap.xml
    Import Nessus file:          celerystalk import -f /assessments/scan.nessus
    Import list of Domains:      celerystalk import -D <file>
    Import list of IPs/Ranges:   celerystalk import -S <file>
    Import multiple files:       celerystalk import -f nmap.xml -S scope.txt -D domains.txt

Subdomain Recon
    Find subdomains:             celerystalk subdomains -d domain1.com,domain2.com

Scan
    Scan all in-scope hosts:     celerystalk scan
    Scan subset of DB hosts:     celerystalk scan -t <ip,cidr,vhost>, e.g. celerystalk scan -t sub.domain.com
    Simulation mode:             celerystalk scan -s

Import and Scan
    Start from Nmap XML file:    celerystalk scan -f /pentest/nmap.xml
    Start from Nessus file:      celerystalk scan -f /pentest/scan.nessus
    Scan subset hosts in XML:    celerystalk scan -f <file> -t <targets>
    Simulation mode:             celerystalk scan -f <file> -s

Rescan
    Rescan all hosts:            celerystalk rescan
    Rescan some hosts:           celerystalk rescan -t <ip,cidr,vhost>, e.g. celerystalk rescan -t sub.domain.com
    Simulation mode:             celerystalk rescan -s

Query Mode
    All tasks:                   celerystalk query
    Update status every 2s:      celerystalk query watch
    Show only 5 tasks per mode:  celerystalk query brief
    Show stats only:             celerystalk query summary
    Show stats every 2s:         celerystalk query summary watch

Job Control (cancel/pause/resume)
    Specific tasks:
        celerystalk cancel 5,6,10-20
        celerystalk pause 5,6,10-20
        celerystalk resume 5,6,10-20
    All tasks in current workspace:
        celerystalk cancel all
        celerystalk pause all
        celerystalk resume all

Access the DB
    Show workspaces:  celerystalk db workspaces
    Show services:    celerystalk db services
    Show hosts:       celerystalk db hosts
    Show vhosts only: celerystalk db vhosts
    Show paths:       celerystalk db paths

Export DB
    Export current DB: celerystalk db export

Credit

This project was inspired by many great tools:

- https://github.com/codingo/Reconnoitre by @codingo_
- https://github.com/frizb/Vanquish by @frizb
- https://github.com/leebaird/discover by @discoverscripts
- https://github.com/1N3/Sn1per
- https://github.com/SrFlipFlop/Network-Security-Analysis by @SrFlipFlop

Thanks to @offensivesecurity and @hackthebox_eu for their lab networks.

Also, thanks to:

- @decidedlygray for pointing me towards celery, helping me solve python problems that were over my head, and for the extensive beta testing
- @kerpanic for inspiring me to dust off an old project and turn it into celerystalk
- My TUV OpenSky team and my IthacaSec hackers for testing this out and submitting bugs and features

Download Celerystalk

Link: http://feedproxy.google.com/~r/PentestTools/~3/9zxM11uFyz8/celerystalk-asynchronous-enumeration.html

Dawnscanner – Dawn Is A Static Analysis Security Scanner For Ruby Written Web Applications (Sinatra, Padrino And ROR Frameworks)

dawnscanner is a source code scanner designed to review your ruby code for security issues. dawnscanner is able to scan plain ruby scripts (e.g. command line applications), but all its features are unleashed when dealing with web application source code. dawnscanner can scan the major MVC (Model View Controller) frameworks out of the box:

- Ruby on Rails
- Sinatra
- Padrino

Quick update from November, 2018

As you can see, dawnscanner has been on hold for more than a year. Sorry for that. It's life. I was overwhelmed by tons of stuff and I dedicated my free time to Offensive Security certifications. Truth be told, I'm starting the OSCE journey really soon. The dawnscanner project will be updated soon with new security checks and kickstarted again. Paolo

dawnscanner version 1.6.6 has 235 security checks loaded in its knowledge base. Most of them are CVE bulletins applying to gems or the ruby interpreter itself. There are also some checks coming from the Owasp Ruby on Rails cheatsheet.

An overall introduction

When you run dawnscanner on your code, it parses your project's Gemfile.lock looking for the gems used, and it tries to detect the ruby interpreter version you are using or have declared in the ruby version management tool you like most (RVM, rbenv, ...). Then the tool tries to detect the MVC framework your web application uses and applies the security checks accordingly. There are checks designed to match rails applications and checks that are applicable to any ruby code. dawnscanner can also understand the code in your views and backtrack sinks to spot cross site scripting and sql injections introduced by the code you actually wrote.
In the project roadmap, this is the code most future development effort will be focused on. A dawnscanner security scan produces a list of vulnerabilities with some mitigation actions you want to follow in order to build a stronger web application.

Installation

You can install the latest dawnscanner version, fetching it from Rubygems, by typing:

    $ gem install dawnscanner

If you want to add dawn to your project Gemfile, you must add the following:

    group :development do
      gem 'dawnscanner', :require => false
    end

And then upgrade your bundle:

    $ bundle install

You may want to build it from source, so you have to check it out from github first:

    $ git clone https://github.com/thesp0nge/dawnscanner.git
    $ cd dawnscanner
    $ bundle install
    $ rake install

The dawnscanner gem will be built in a pkg directory and then installed on your system. Please note that you have to manage dependencies on your own this way. It makes sense only if you want to hack the code or something like that.

Usage

You can start your code review with dawnscanner very easily: simply tell the tool the project root directory. The underlying MVC framework is autodetected by dawnscanner using the target's Gemfile.lock file.
If autodetection fails for some reason, the tool will complain about it and you have to specify by hand whether it's a rails, sinatra, or padrino web application. Basic usage is to specify some optional command line options to best fit your needs, and the target directory where your code is stored.

    $ dawn [options] target

In case of need, there is a quick command line option reference; run dawn -h at your OS prompt.

    $ dawn -h
    Usage: dawn [options] target_directory

    Examples:
      $ dawn a_sinatra_webapp_directory
      $ dawn -C the_rails_blog_engine
      $ dawn -C --json a_sinatra_webapp_directory
      $ dawn --ascii-tabular-report my_rails_blog_ecommerce
      $ dawn --html -F my_report.html my_rails_blog_ecommerce

       -G, --gem-lock       force dawn to scan only for vulnerabilities affecting dependencies in Gemfile.lock (DEPRECATED)
       -d, --dependencies   force dawn to scan only for vulnerabilities affecting dependencies in Gemfile.lock

    Reporting
       -a, --ascii-tabular-report   cause dawn to format findings using tables in ascii art (DEPRECATED)
       -j, --json                   cause dawn to format findings using json
       -K, --console                cause dawn to format findings using plain ascii text
       -C, --count-only             dawn will only count vulnerabilities (useful for scripts)
       -z, --exit-on-warn           dawn will return the number of found vulnerabilities as the exit code
       -F, --file filename          tells dawn to write output to filename
       -c, --config-file filename   tells dawn to load configuration from filename

    Disable security check family
       --disable-cve-bulletins          disable all CVE security checks
       --disable-code-quality           disable all code quality checks
       --disable-code-style             disable all code style checks
       --disable-owasp-ror-cheatsheet   disable all Owasp Ruby on Rails cheatsheet checks
       --disable-owasp-top-10           disable all Owasp Top 10 checks

    Flags useful to query Dawn
       -S, --search-knowledge-base [check_name]   search check_name in the knowledge base
       --list-knowledge-base                      list knowledge-base content
       --list-known-families                      list security check families contained in dawn's knowledge base
       --list-known-framework                     list ruby MVC frameworks supported by dawn
       --list-scan-registry                       list past scan information stored in the scan registry

    Service flags
       -D, --debug     enters dawn debug mode
       -V, --verbose   the output will be more verbose
       -v, --version   show version information
       -h, --help      show this help

Rake task

To include dawnscanner in your rake task list, you simply have to put this line in your Rakefile:

    require 'dawn/tasks'

Then, executing $ rake -T, you will have a dawn:run task you want to execute.

    $ rake -T
    ...
    rake dawn:run  # Execute dawnscanner on the current directory
    ...

Interacting with the knowledge base

You can dump all security checks in the knowledge base this way:

    $ dawn --list-knowledge-base

Useful in scripts, you can use --search-knowledge-base or -S with the check name as parameter to see whether it's implemented as a security control or not.

    $ dawn -S CVE-2013-6421
    07:59:30 [*] dawn v1.1.0 is starting up
    CVE-2013-6421 found in knowledgebase.

    $ dawn -S this_test_does_not_exist
    08:02:17 [*] dawn v1.1.0 is starting up
    this_test_does_not_exist not found in knowledgebase

dawnscanner security scan in action

As output, dawnscanner prints all security checks that failed during the scan. This is the result of dawnscanner running against a Sinatra 1.4.2 web application written for a talk I delivered in 2013 at the Railsberry conference. As you can see, dawnscanner first detects the MVC framework running the application by looking at Gemfile.lock, then it discards all security checks not applicable to Sinatra (49 security checks, in version 1.0, especially designed for Ruby on Rails) and applies the rest.

    $ dawn ~/src/hacking/railsberry2013
    18:40:27 [*] dawn v1.1.0 is starting up
    18:40:27 [$] dawn: scanning /Users/thesp0nge/src/hacking/railsberry2013
    18:40:27 [$] dawn: sinatra v1.4.2 detected
    18:40:27 [$] dawn: applying all security checks
    18:40:27 [$] dawn: 109 security checks applied - 0 security checks skipped
    18:40:27 [$] dawn: 1 vulnerabilities found
    18:40:27 [!] dawn: CVE-2013-1800 check failed
    18:40:27 [$] dawn: Severity: high
    18:40:27 [$] dawn: Priority: unknown
    18:40:27 [$] dawn: Description: The crack gem 0.3.1 and earlier for Ruby does not properly restrict casts of string values, which might allow remote attackers to conduct object-injection attacks and execute arbitrary code, or cause a denial of service (memory and CPU consumption) by leveraging Action Pack support for (1) YAML type conversion or (2) Symbol type conversion, a similar vulnerability to CVE-2013-0156.
    18:40:27 [$] dawn: Solution: Please use crack gem version 0.3.2 or above. Correct your gemfile
    18:40:27 [$] dawn: Evidence:
    18:40:27 [$] dawn: Vulnerable crack gem version found: 0.3.1
    18:40:27 [*] dawn is leaving

When you run dawnscanner on a web application with up-to-date dependencies, it's likely to return a friendly "no vulnerabilities found" message. Keep it up working that way!

This is dawnscanner running against a Padrino web application I wrote for a scorecard quiz game about application security. Italian language only.
Sorry.

    18:42:39 [*] dawn v1.1.0 is starting up
    18:42:39 [$] dawn: scanning /Users/thesp0nge/src/CORE_PROJECTS/scorecard
    18:42:39 [$] dawn: padrino v0.11.2 detected
    18:42:39 [$] dawn: applying all security checks
    18:42:39 [$] dawn: 109 security checks applied - 0 security checks skipped
    18:42:39 [*] dawn: no vulnerabilities found.
    18:42:39 [*] dawn is leaving

If you need a fancy HTML report about your scan, just ask dawnscanner with the --html flag used together with the --file flag, since I want to save the HTML to disk.

    $ dawn /Users/thesp0nge/src/hacking/rt_first_app --html --file report.html
    09:00:54 [*] dawn v1.1.0 is starting up
    09:00:54 [*] dawn: report.html created (2952 bytes)
    09:00:54 [*] dawn is leaving

Useful links

Project homepage: http://dawnscanner.org
Twitter profile: @dawnscanner
Github repository: https://github.com/thesp0nge/dawnscanner
Mailing list: https://groups.google.com/forum/#!forum/dawnscanner

Thanks to

- saten: first issue posted about a typo in the README
- presidentbeef: for his outstanding work that inspired me creating dawn and for the double check comparison matrix. Issue #2 is yours :)
- marinerJB: for misc bug reports and further ideas
- Matteo: for ideas on the API and its usage with github.com hooks

Download Dawnscanner

Link: http://feedproxy.google.com/~r/PentestTools/~3/gox5JYdlGTc/dawnscanner-dawn-is-static-analysis.html

DevAudit – Open-source, Cross-Platform, Multi-Purpose Security Auditing Tool

DevAudit is an open-source, cross-platform, multi-purpose security auditing tool targeted at developers and teams adopting DevOps and DevSecOps that detects security vulnerabilities at multiple levels of the solution stack. DevAudit provides a wide array of auditing capabilities that automate security practices and the implementation of security auditing in the software development life-cycle. DevAudit can scan your operating system and application package dependencies, application and application server configurations, and application code for potential vulnerabilities, based on data aggregated by providers like OSS Index and Vulners from a wide array of sources and data feeds such as the National Vulnerability Database (NVD) CVE data feed, the Debian Security Advisories data feed, Drupal Security Advisories, and many others.

DevAudit helps developers address at least 4 of the OWASP Top 10 risks to web application development:

- A9 Using Components with Known Vulnerabilities
- A5 Security Misconfiguration
- A6 Sensitive Data Exposure
- A2 Broken Authentication and Session Management

as well as risks classified by MITRE in the CWE dictionary, such as CWE-2 Environment and CWE-200 Information Disclosure. As development progresses and its capabilities mature, DevAudit will be able to address the other risks on the OWASP Top 10 and CWE lists, like Injection and XSS. With the focus on web, cloud, and distributed multi-user applications, software development today is increasingly a complex affair, with security issues and potential vulnerabilities arising at all levels of the stack developers rely on to deliver applications. The goal of DevAudit is to provide a platform for automating the implementation of development security reviews and best practices at all levels of the solution stack, from library package dependencies to application and server configuration to source code.

Features

- Cross-platform with a Docker image also available. DevAudit runs on Windows and Linux, with *BSD, Mac, and ARM Linux support planned. Only an up-to-date version of .NET or Mono is required to run DevAudit. A DevAudit Docker image can also be pulled from Docker Hub and run without the need to install Mono.
- CLI interface. DevAudit has a CLI interface with an option for non-interactive output and can be easily integrated into CI build pipelines or as post-build command-line tasks in developer IDEs. Work on integration of the core audit library into IDE GUIs has already begun with the Audit.Net Visual Studio extension.
- Continuously updated vulnerabilities data. DevAudit uses backend data providers like OSS Index and Vulners, which provide continuously updated vulnerabilities data compiled from a wide range of security data feeds and sources such as the NVD CVE feeds, Drupal Security Advisories, and so on. Support for additional vulnerability and package data providers like vFeed and Libraries.io will be added.
- Audit operating system and development package dependencies. DevAudit audits Windows applications and packages installed via Windows MSI, Chocolatey, and OneGet, as well as Debian, Ubuntu, and CentOS Linux packages installed via Dpkg, RPM, and YUM, for vulnerabilities reported for specific versions of the applications and packages. For development package dependencies and libraries, DevAudit audits NuGet v2 dependencies for .NET, Yarn/NPM and Bower dependencies for nodejs, and Composer package dependencies for PHP. Support for other package managers for different languages is added regularly.
- Audit application server configurations. DevAudit audits the server version and the server configuration for the OpenSSH sshd, Apache httpd, MySQL/MariaDB, PostgreSQL, and Nginx servers, with many more coming. Configuration auditing is based on the Alpheus library and is done using full syntactic analysis of the server configuration files. Server configuration rules are stored in YAML text files and can be customized to the needs of developers. Support for many more servers and applications and types of analysis like database auditing is added regularly.
- Audit application configurations. DevAudit audits Microsoft ASP.NET applications and detects vulnerabilities present in the application configuration. Application configuration rules are stored in YAML text files and can be customized to the needs of developers. Application configuration auditing for applications like Drupal, WordPress, and DNN CMS is coming.
- Audit application code by static analysis. DevAudit currently supports static analysis of .NET CIL bytecode. Analyzers reside in external script files and can be fully customized based on the needs of the developer. Support for C# source code analysis via Roslyn, PHP7 source code, and many more languages and external static code analysis tools is coming.
- Remote agentless auditing. DevAudit can connect to remote hosts via SSH, with identical auditing features available in remote environments as in local environments. Only a valid SSH login is required to audit remote hosts, and DevAudit running on Windows can connect to and audit Linux hosts over SSH. On Windows, DevAudit can also remotely connect to and audit other Windows machines using WinRM.
- Agentless Docker container auditing. DevAudit can audit running Docker containers from the Docker host, with identical features available in container environments as in local environments.
- GitHub repository auditing. DevAudit can connect directly to a project repository hosted on GitHub and perform package source and application configuration auditing.
- PowerShell support. DevAudit can also be run inside the PowerShell system administration environment as cmdlets. Work on PowerShell support is paused at present but will resume in the near future, with support for cross-platform PowerShell both on Windows and Linux.
Requirements

DevAudit is a .NET 4.6 application. To install locally on your machine you will need either the Microsoft .NET Framework 4.6 runtime on Windows, or Mono 4.4+ on Linux. .NET 4.6 should already be installed on most recent versions of Windows; if not, it is available as a Windows feature that can be turned on or installed from the Programs and Features control panel applet on consumer Windows, or from the Add Roles and Features option in Server Manager on server versions of Windows. For older versions of Windows, the .NET 4.6 installer from Microsoft can be found here.

On Linux the minimum version of Mono supported is 4.4. Although DevAudit runs on Mono 4 (with one known issue), it's recommended that Mono 5 be installed. Mono 5 brings many improvements to the build and runtime components of Mono that benefit DevAudit. The existing Mono packages provided by your distro are probably not Mono 5 yet, so you will have to install Mono packages manually to be able to use Mono 5. Installation instructions for the most recent packages provided by the Mono project for several major Linux distros are here. It is recommended you have the mono-devel package installed, as this will reduce the chances of missing assemblies.

Alternatively, on Linux you can use the DevAudit Docker image if you do not wish to install Mono and already have Docker installed on your machine.

Installation

DevAudit can be installed by the following methods:

- Building from source.
- Using a binary release archive file downloaded from Github for Windows or Linux.
- Using the release MSI installer downloaded from Github for Windows.
- Using the Chocolatey package manager on Windows.
- Pulling the ossindex/devaudit image from Docker Hub on Linux.

Building from source on Linux

Pre-requisites: Mono 4.4+ (Mono 5 recommended) and the mono-devel package, which provides the compiler and other tools needed for building Mono apps.
Your distro should have packages for at least Mono version 4.4 and above; otherwise, manual installation instructions for the most recent packages provided by the Mono project for several major Linux distros are here.

Clone the DevAudit repository from https://github.com/OSSIndex/DevAudit.git.
Run the build.sh script in the root DevAudit directory. DevAudit should compile without any errors.
Run ./devaudit --help and you should see the DevAudit version and help screen printed.

Note that NuGet on Linux may occasionally exit with Error: NameResolutionFailure, which seems to be a transient problem contacting the servers that contain the NuGet packages. You should just run ./build.sh again until the build completes normally.

Building from source on Windows

Pre-requisites: You must have one of: a .NET Framework 4.6 SDK or developer pack, or Visual Studio 2015.

Clone the DevAudit repository from https://github.com/OSSIndex/DevAudit.git.
From a Visual Studio 2015 or .NET SDK command prompt, run the build.cmd script in the root DevAudit directory. DevAudit should compile without any errors.
Run ./devaudit --help and you should see the DevAudit version and help screen printed.

Installing from the release archive files on Windows or Linux

Pre-requisites: You must have Mono 4.4+ on Linux or .NET 4.6 on Windows.

Download the latest release archive file for Windows or Linux from the project releases page. Unpack this file to a directory. From the directory where you unpacked the release archive, run devaudit --help on Windows or ./devaudit --help on Linux. You should see the version and help screen printed.
(Optional) Add the DevAudit installation directory to your PATH environment variable.

Installing using the MSI Installer on Windows

The MSI installer for a release can be found on the GitHub releases page.

Click on the releases link near the top of the page.
Identify the release you would like to install.
A "DevAudit.exe" link should be visible for each release that has a pre-built installer.
Download the file and execute the installer. You will be guided through a simple installation.
Open a new command prompt or PowerShell window in order to have DevAudit in your path.
Run DevAudit.

Installing using Chocolatey on Windows

DevAudit is also available on Chocolatey.

Install Chocolatey.
Open an admin console or PowerShell window.
Type choco install devaudit.
Run DevAudit.

Installing using Docker on Linux

Pull the DevAudit image from Docker Hub: docker pull ossindex/devaudit. The image tagged ossindex/devaudit:latest (which is the default image that is downloaded) is built from the most recent release, while ossindex/devaudit:unstable is built on the master branch of the source code and contains the newest additions, albeit with less testing.

Concepts

Audit Target

Represents a logical group of auditing functions. DevAudit currently supports the following audit targets:

Package Source. A package source manages application and library dependencies using a package manager. Package managers install, remove or update applications and library dependencies for an operating system like Debian Linux, or for a development language or framework like .NET or nodejs. Examples of package sources are dpkg, yum, Chocolatey, Composer, and Bower. DevAudit audits the names and versions of installed packages against vulnerabilities reported for specific versions of those packages.

Application. An application like Drupal or a custom application built using a framework like ASP.NET.
DevAudit audits applications and application modules and plugins against vulnerabilities reported for specific versions of application binaries, modules and plugins. DevAudit can also audit application configurations for known vulnerabilities, and perform static analysis on application code looking for known weaknesses.

Application Server. Application servers provide continuously running services or daemons, like a web or database server, for other applications to use, or for users to access services like authentication. Examples of application servers are the OpenSSH sshd and Apache httpd servers. DevAudit can audit application server binaries, modules and plugins against vulnerabilities reported for specific versions, as well as audit server configurations for known server configuration vulnerabilities and weaknesses.

Audit Environment

Represents a logical environment where audits against audit targets are executed. Audit environments abstract the I/O and command executions required for an audit and allow identical functions to be performed against audit targets on whatever physical or network location the target's files and executables are located. The following environments are currently supported:

Local. This is the default audit environment, where audits are executed on the local machine.
SSH. Audits are executed on a remote host connected over SSH. It is not necessary to have DevAudit installed on the remote host.
WinRM. Audits are executed on a remote Windows host connected over WinRM. It is not necessary to have DevAudit installed on the remote host.
Docker. Audits are executed on a running Docker container. It is not necessary to have DevAudit installed on the container image.
GitHub. Audits are executed on a GitHub project repository's file system directly. It is not necessary to check out or download the project locally to perform the audit.

Audit Options

These are different options that can be enabled for the audit.
You can specify options that apply to the DevAudit program, for example to run in non-interactive mode, as well as options that apply to the target, e.g. if you set the AppDevMode option to true when auditing ASP.NET applications then certain audit rules will not be enabled.

Basic Usage

The CLI is the primary interface to the DevAudit program and is suitable both for interactive use and for non-interactive use in scheduled tasks, shell scripts, CI build pipelines and post-build tasks in developer IDEs. The basic DevAudit CLI syntax is:

devaudit TARGET [ENVIRONMENT] | [OPTIONS]

where TARGET specifies the audit target, ENVIRONMENT specifies the audit environment, and OPTIONS specifies the options for the audit target and environment. There are two ways to specify options: program options and general audit options that apply to more than one target can be specified directly on the command line as parameters, while target-specific options can be specified with the -o option using the format -o OPTION1=VALUE1,OPTION2=VALUE2,... with commas delimiting each option key-value pair.

If you are piping or redirecting the program output to a file then you should always use the -n --non-interactive option to disable any interactive user interface features and animations.

When specifying file paths, an @ prefix before a path indicates to DevAudit that this path is relative to the root directory of the audit target, e.g. if you specify -r c:\myproject -b @bin\Debug\app2.exe then DevAudit considers the path to the binary file to be c:\myproject\bin\Debug\app2.exe.

Audit Targets

Package Sources

msi Do a package audit of the Windows Installer MSI package source on Windows machines.
choco Do a package audit of packages installed by the Choco package manager.
oneget Do a package audit of the system OneGet package source on Windows.
nuget Do a package audit of a NuGet v2 package source.
You must specify the location of the NuGet packages.config file you wish to audit using the -f or --file option, otherwise the current directory will be searched for this file.
bower Do a package audit of a Bower package source. You must specify the location of the Bower packages.json file you wish to audit using the -f or --file option, otherwise the current directory will be searched for this file.
composer Do a package audit of a Composer package source. You must specify the location of the Composer composer.json file you wish to audit using the -f or --file option, otherwise the current directory will be searched for this file.
dpkg Do a package audit of the system dpkg package source on Debian Linux and derivatives.
rpm Do a package audit of the system RPM package source on RedHat Linux and derivatives.
yum Do a package audit of the system Yum package source on RedHat Linux and derivatives.

For every package source the following general audit options can be used:

-f --file Specify the location of the package manager configuration file if needed. The NuGet, Bower and Composer package sources require this option.
--list-packages Only list the packages in the package source scanned by DevAudit.
--list-artifacts Only list the artifacts found on OSS Index for packages scanned by DevAudit.

Package sources tagged [Experimental] are only available in the master branch of the source code and may have limited back-end OSS Index support. However, you can always list the packages scanned and artifacts available on OSS Index using the --list-packages and --list-artifacts options.

Applications

aspnet Do an application audit on an ASP.NET application. The relevant options are:

-r --root-directory Specify the root directory of the application. This is just the top-level application directory that contains files like Global.asax and Web.config.
-b --application-binary Specify the application binary. This is the .NET assembly that contains the application's .NET bytecode.
This file is usually a .DLL and located in the bin sub-folder of the ASP.NET application root directory.
-c --configuration-file or -o AppConfig=configuration-file Specifies the ASP.NET application configuration file. This file is usually named Web.config and located in the application root directory. You can override the default @Web.config value with this option.
-o AppDevMode=enabled Specifies that application development mode should be enabled for the audit. This mode can be used when auditing an application that is under development. Certain configuration rules that are tagged as disabled for AppDevMode (e.g. running the application in ASP.NET debug mode) will not be enabled during the audit.

netfx Do an application audit on a .NET application. The relevant options are:

-r --root-directory Specify the root directory of the application. This is just the top-level application directory that contains files like App.config.
-b --application-binary Specify the application binary. This is the .NET assembly that contains the application's .NET bytecode. This file is usually a .DLL and located in the bin sub-folder of the application root directory.
-c --configuration-file or -o AppConfig=configuration-file Specifies the .NET application configuration file. This file is usually named App.config and located in the application root directory. You can override the default @App.config value with this option.
-o GendarmeRules=RuleLibrary Specifies that the Gendarme static analyzer should be enabled for the audit, with rules from the specified rules library used. For example:

devaudit netfx -r /home/allisterb/vbot-debian/vbot.core -b @bin/Debug/vbot.core.dll --skip-packages-audit -o GendarmeRules=Gendarme.Rules.Naming

will run the Gendarme static analyzer on the vbot.core.dll assembly using rules from the Gendarme.Rules.Naming library.
The complete list of rules libraries is (taken from the Gendarme wiki):

Gendarme.Rules.BadPractice
Gendarme.Rules.Concurrency
Gendarme.Rules.Correctness
Gendarme.Rules.Design
Gendarme.Rules.Design.Generic
Gendarme.Rules.Design.Linq
Gendarme.Rules.Exceptions
Gendarme.Rules.Gendarme
Gendarme.Rules.Globalization
Gendarme.Rules.Interoperability
Gendarme.Rules.Interoperability.Com
Gendarme.Rules.Maintainability
Gendarme.Rules.NUnit
Gendarme.Rules.Naming
Gendarme.Rules.Performance
Gendarme.Rules.Portability
Gendarme.Rules.Security
Gendarme.Rules.Security.Cas
Gendarme.Rules.Serialization
Gendarme.Rules.Smells
Gendarme.Rules.Ui

drupal7 Do an application audit on a Drupal 7 application.
-r --root-directory Specify the root directory of the application. This is just the top-level directory of your Drupal 7 install.

drupal8 Do an application audit on a Drupal 8 application.
-r --root-directory Specify the root directory of the application. This is just the top-level directory of your Drupal 8 install.

All applications also support the following common options for auditing the application modules or plugins:

--list-packages Only list the application plugins or modules scanned by DevAudit.
--list-artifacts Only list the artifacts found on OSS Index for application plugins and modules scanned by DevAudit.
--skip-packages-audit Only do an application configuration or code analysis audit and skip the packages audit.

Application Servers

sshd Do an application server audit on an OpenSSH sshd-compatible server.
httpd Do an application server audit on an Apache httpd-compatible server.
mysql Do an application server audit on a MySQL-compatible server (like MariaDB or Oracle MySQL).
nginx Do an application server audit on an Nginx server.
pgsql Do an application server audit on a PostgreSQL server.
This is an example command line for an application server audit:

./devaudit httpd -i httpd-2.2 -r /usr/local/apache2/ -c @conf/httpd.conf -b @bin/httpd

which audits an Apache httpd server running on a Docker container named httpd-2.2.

The following are audit options common to all application servers:

-r --root-directory Specifies the root directory of the server. This is just the top level of your server filesystem and defaults to / unless you want a different server root.
-c --configuration-file Specifies the server configuration file, e.g. in the above audit the Apache configuration file is located at /usr/local/apache2/conf/httpd.conf. If you don't specify the configuration file, DevAudit will attempt to auto-detect the configuration file for the server selected.
-b --application-binary Specifies the server binary, e.g. in the above audit the Apache binary is located at /usr/local/apache2/bin/httpd. If you don't specify the binary path, DevAudit will attempt to auto-detect the server binary for the server selected.

Application servers also support the following common options for auditing the server modules or plugins:

--list-packages Only list the application plugins or modules scanned by DevAudit.
--list-artifacts Only list the artifacts found on OSS Index for application plugins and modules scanned by DevAudit.
--skip-packages-audit Only do a server configuration audit and skip the packages audit.

Environments

There are currently five audit environments supported: local, remote hosts over SSH, remote hosts over WinRM, Docker containers, and GitHub. Local environments are used by default when no other environment options are specified.

SSH

The SSH environment allows audits to be performed on any remote host accessible over SSH without requiring DevAudit to be installed on the remote host. SSH environments are cross-platform: you can connect to a Linux remote host from a Windows machine running DevAudit.
An SSH environment is created by the following options:

-s SERVER [--ssh-port PORT] -u USER [-k KEYFILE] [-p | --password-text PASSWORD]

-s SERVER Specifies the remote host or IP to connect to via SSH.
-u USER Specifies the user to login to the server with.
--ssh-port PORT Specifies the port on the remote host to connect to. The default is 22.
-k KEYFILE Specifies the OpenSSH-compatible private key file to use to connect to the remote server. Currently only RSA or DSA keys in files in the PEM format are supported.
-p Provide a prompt with local echo disabled for interactive entry of the server password or key file passphrase.
--password-text PASSWORD Specify the user password or key file passphrase as plaintext on the command line. Note that on Linux, when your password contains special characters, you should enclose the text on the command line in single quotes like 'MyPa

Link: http://www.kitploit.com/2018/12/devaudit-open-source-cross-platform.html

Knock v.4.1.1 – Subdomain Scan

Knockpy is a python tool designed to enumerate subdomains on a target domain through a wordlist. It is designed to scan for DNS zone transfer and to try to bypass the wildcard DNS record automatically if it is enabled. Knockpy now supports queries to VirusTotal subdomains; you can set the API_KEY within the config.json file.

Very simply

$ knockpy domain.com

Export full report in JSON

If you want to save a full log like this one, just type:

$ knockpy domain.com --json

Install

Prerequisites
Python 2.7.6

Dependencies
Dnspython
$ sudo apt-get install python-dnspython

Installing
$ git clone https://github.com/guelfoweb/knock.git
$ cd knock
$ nano knockpy/config.json <- set your virustotal API_KEY
$ sudo python setup.py install

Note that it's recommended to use Google DNS.

Knockpy arguments

$ knockpy -h
usage: knockpy [-h] [-v] [-w WORDLIST] [-r] [-c] [-j] domain
___________________________________________
knock subdomain scan
knockpy v.4.1
Author: Gianni 'guelfoweb' Amato
Github: https://github.com/guelfoweb/knock
___________________________________________
positional arguments:
  domain          target to scan, like domain.com
optional arguments:
  -h, --help      show this help message and exit
  -v, --version   show program's version number and exit
  -w WORDLIST     specific path to wordlist file
  -r, --resolve   resolve ip or domain name
  -c, --csv       save output in csv
  -f, --csvfields add fields name to the first row of csv output file
  -j, --json      export full report in JSON
example:
  knockpy domain.com
  knockpy domain.com -w wordlist.txt
  knockpy -r domain.com or IP
  knockpy -c domain.com
  knockpy -j domain.com

For VirusTotal subdomains support you can set your API_KEY in the config.json file.
Example

Subdomain scan with internal wordlist
$ knockpy domain.com

Subdomain scan with external wordlist
$ knockpy domain.com -w wordlist.txt

Resolve domain name and get response headers
$ knockpy -r domain.com [or IP]

+ checking for virustotal subdomains: YES
[
  "partnerissuetracker.corp.google.com",
  "issuetracker.google.com",
  "r5---sn-ogueln7k.c.pack.google.com",
  "cse.google.com",
  .......too long.......
  "612.talkgadget.google.com",
  "765.talkgadget.google.com",
  "973.talkgadget.google.com"
]
+ checking for wildcard: NO
+ checking for zonetransfer: NO
+ resolving target: YES
{
  "zonetransfer": {
    "enabled": false,
    "list": []
  },
  "target": "google.com",
  "hostname": "google.com",
  "virustotal": [
    "partnerissuetracker.corp.google.com",
    "issuetracker.google.com",
    "r5---sn-ogueln7k.c.pack.google.com",
    "cse.google.com",
    "mt0.google.com",
    "earth.google.com",
    "clients1.google.com",
    "pki.google.com",
    "www.sites.google.com",
    "appengine.google.com",
    "fcmatch.google.com",
    "dl.google.com",
    "translate.google.com",
    "feedproxy.google.com",
    "hangouts.google.com",
    "news.google.com",
    .......too long.......
"100.talkgadget.google.com", "services.google.com", "301.talkgadget.google.com", "857.talkgadget.google.com", "600.talkgadget.google.com", "992.talkgadget.google.com", "93.talkgadget.google.com", "storage.cloud.google.com", "863.talkgadget.google.com", "maps.google.com", "661.talkgadget.google.com", "325.talkgadget.google.com", "sites.google.com", "feedburner.google.com", "support.google.com", "code.google.com", "562.talkgadget.google.com", "190.talkgadget.google.com", "58.talkgadget.google.com", "612.talkgadget.google.com", "765.talkgadget.google.com", "973.talkgadget.google.com" ], "alias": [], "wildcard": { "detected": {}, "test_target": "eqskochdzapjbt.google.com", "enabled": false, "http_response": {} }, "ipaddress": [ "" ], "response_time": "0.0351989269257", "http_response": { "status": { "reason": "Found", "code": 302 }, "http_headers": { "content-length": "256", "location": "http://www.google.it/?gfe_rd=cr&ei=60WIWdmnDILCXoKbgfgK", "cache-control": "private", "date": "Mon, 07 Aug 2017 10:50:19 GMT", "referrer-policy": "no-referrer", "content-type": "text/html; charset=UTF-8" } }}Save scan output in CSV$ knockpy -c domain.comExport full report in JSON$ knockpy -j domain.com Talk aboutEthical Hacking and Penetration Testing Guide Book by Rafay Baloch.Knockpy comes pre-installed on the following security distributions for penetration test:BackBox LinuxPentestBox for WindowsBuscador Investigative Operating System OtherThis tool is currently maintained by Gianni 'guelfoweb' Amato, who can be contacted at guelfoweb@gmail.com or twitter @guelfoweb. Suggestions and criticism are welcome.Download Knock

Link: http://www.kitploit.com/2018/12/knock-v411-subdomain-scan.html

Cameradar v2.1.0 – Hacks Its Way Into RTSP Videosurveillance Cameras

An RTSP stream access tool that comes with its own library.

Cameradar allows you to:
Detect open RTSP hosts on any accessible target host
Detect which device model is streaming
Launch automated dictionary attacks to get their stream route (e.g.: /live.sdp)
Launch automated dictionary attacks to get the username and password of the cameras
Retrieve a complete and user-friendly report of the results

Docker Image for Cameradar

Install docker on your machine, and run the following command:

docker run -t ullaakut/cameradar -t <other command-line options>

See command-line options.

e.g.: docker run -t ullaakut/cameradar -t -l will scan the ports 554 and 8554 of hosts on the subnetwork, attack the discovered RTSP streams, and output debug logs.

YOUR_TARGET can be a subnet, an IP, or a range of IPs.

If you want to get the precise results of the nmap scan in the form of an XML file, you can add -v /your/path:/tmp/cameradar_scan.xml to the docker run command, before ullaakut/cameradar.

If you use the -r and -c options to specify your custom dictionaries, make sure to also use a volume to add them to the docker container. Example: docker run -t -v /path/to/dictionaries/:/tmp/ ullaakut/cameradar -r /tmp/myroutes -c /tmp/mycredentials.json -t mytarget

Installing the binary on your machine

Only use this solution if for some reason using docker is not an option for you or if you want to locally build Cameradar on your machine.

Dependencies
dep

Installing dep
OSX: brew install dep and brew upgrade dep
Others: Download the release package for your OS here

Steps to install
Make sure you installed the dependencies mentioned above.
go get github.com/Ullaakut/cameradar
cd $GOPATH/src/github.com/Ullaakut/cameradar
dep ensure
cd cameradar
go install

The cameradar binary is now in your $GOPATH/bin, ready to be used.
See command line options here.

Library

Dependencies of the library
curl-dev / libcurl (depending on your OS)
nmap
github.com/pkg/errors
gopkg.in/go-playground/validator.v9
github.com/andelf/go-curl

Installing the library
go get github.com/Ullaakut/cameradar

After this command, the cameradar library is ready to use. Its source will be in:
$GOPATH/src/pkg/github.com/Ullaakut/cameradar
You can use go get -u to update the package.

Here is an overview of the exposed functions of this library:

Discovery
You can use the cameradar library for simple discovery purposes if you don't need to access the cameras but just want to be aware of their existence. This describes the nmap time presets. You can pass a value between 1 and 5, as described in this table, to the NmapRun function.

Attack
If you already know which hosts and ports you want to attack, you can also skip the discovery part and use the attack functions directly. The attack functions also take a timeout value as a parameter.

Data models
Here are the different data models useful to use the exposed functions of the cameradar library.

Dictionary loaders
The cameradar library also provides two functions that take file paths as inputs and return the appropriate data models filled.

Configuration
The RTSP port used for most cameras is 554, so you should probably specify 554 as one of the ports you scan.
Not specifying any ports to the cameradar application will scan the 554 and 8554 ports.

docker run -t --net=host ullaakut/cameradar -p "18554,19000-19010" -t localhost will scan the ports 18554, and the range of ports between 19000 and 19010, on localhost.

You can use your own files for the ids and routes dictionaries used to attack the cameras, but the Cameradar repository already gives you a good base that works with most cameras, in the /dictionaries folder.

docker run -t -v /my/folder/with/dictionaries:/tmp/dictionaries \
  ullaakut/cameradar \
  -r "/tmp/dictionaries/my_routes" \
  -c "/tmp/dictionaries/my_credentials.json" \
  -t

will put the contents of your folder containing dictionaries in the docker image and will use it for the dictionary attack instead of the default dictionaries provided in the cameradar repo.

Check camera access

If you have VLC Media Player, you should be able to use the GUI or the command line to connect to the RTSP stream using this format: rtsp://username:password@address:port/route

With the above result, the RTSP URL would be rtsp://admin:12345@

Command line options

-t, --target: Set target. Required. Target can be a file (see instructions on how to format the file), an IP, an IP range, a subnetwork, or a combination of those.
-p, --ports: (Default: 554,8554) Set custom ports.
-s, --speed: (Default: 4) Set custom nmap discovery presets to improve speed or accuracy. It's recommended to lower it if you are attempting to scan an unstable and slow network, or to increase it if on a very performant and reliable network. See this for more info on the nmap timing templates.
-T, --timeout: (Default: 2000) Set custom timeout value in milliseconds after which an attack attempt without an answer should give up.
It's recommended to increase it when attempting to scan unstable and slow networks, or to decrease it on very performant and reliable networks.
-r, --custom-routes: (Default: <CAMERADAR_GOPATH>/dictionaries/routes) Set custom dictionary path for routes.
-c, --custom-credentials: (Default: <CAMERADAR_GOPATH>/dictionaries/credentials.json) Set custom dictionary path for credentials.
-o, --nmap-output: (Default: /tmp/cameradar_scan.xml) Set custom nmap output path.
-l, --log: Enable debug logs (nmap requests, curl describe requests, etc.)
-h: Display the usage information.

Format input file

The file can contain IPs, hostnames, IP ranges and subnetworks, separated by newlines.

Environment Variables

CAMERADAR_TARGET
This variable is mandatory and specifies the target that cameradar should scan and attempt to access RTSP streams on.

CAMERADAR_PORTS
This variable is optional and allows you to specify the ports on which to run the scans.
Default value: 554,8554
It is recommended not to change these except if you are certain that cameras have been configured to stream RTSP over a different port. 99.9% of cameras are streaming on these ports.

CAMERADAR_NMAP_OUTPUT_FILE
This variable is optional and allows you to specify to which file nmap will write its output.
Default value: /tmp/cameradar_scan.xml
This can be useful only if you want to read the files yourself, if you don't want it to write in your /tmp folder, or if you want to use only the RunNmap function in cameradar and do its parsing manually.

CAMERADAR_CUSTOM_ROUTES, CAMERADAR_CUSTOM_CREDENTIALS
These variables are optional, allowing you to replace the default dictionaries with custom ones for the dictionary attack.
Default values: <CAMERADAR_GOPATH>/dictionaries/routes and <CAMERADAR_GOPATH>/dictionaries/credentials.json

CAMERADAR_SPEED
This optional variable allows you to set custom nmap discovery presets to improve speed or accuracy.
It's recommended to lower it if you are attempting to scan an unstable and slow network, or to increase it if on a very performant and reliable network. See this for more info on the nmap timing templates.
Default value: 4

CAMERADAR_TIMEOUT
This optional variable allows you to set a custom timeout value in milliseconds after which an attack attempt without an answer should give up. It's recommended to increase it when attempting to scan unstable and slow networks, or to decrease it on very performant and reliable networks.
Default value: 2000

CAMERADAR_LOGS
This optional variable allows you to enable a more verbose output to have more information about what is going on. It will output nmap results, cURL requests, etc.
Default: false

Contribution

Build

Docker build
To build the docker image, simply run docker build -t cameradar . in the root of the project. Your image will be called cameradar, and NOT ullaakut/cameradar.

Go build
To build the project without docker:
Install dep
OSX: brew install dep and brew upgrade dep
Others: Download the release package for your OS here
dep ensure
go build to build the library
cd cameradar && go build to build the binary
The cameradar binary is now in the root of the directory.

See the contribution document to get started.

Frequently Asked Questions

Cameradar does not detect any camera!
That means that either your cameras are not streaming in RTSP or that they are not on the target you are scanning. In most cases, CCTV cameras will be on a private subnetwork, isolated from the internet. Use the -t option to specify your target.

Cameradar detects my cameras, but does not manage to access them at all!
Maybe your cameras have been configured and the credentials / URL have been changed. Cameradar only guesses using default constructor values if a custom dictionary is not provided. You can use your own dictionaries, in which you just have to add your credentials and RTSP routes. To do that, see how the configuration works.
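As noted under CAMERADAR_NMAP_OUTPUT_FILE above, you can read the nmap XML output yourself, for example if you only use the RunNmap function and do the parsing manually. A minimal sketch with Python's standard library; the XML string is a hand-made stand-in for a real /tmp/cameradar_scan.xml, with a placeholder address:

```python
import xml.etree.ElementTree as ET

# Hand-made stand-in for /tmp/cameradar_scan.xml (nmap -oX output);
# 192.168.1.10 is a placeholder address, not a real scan result.
nmap_xml = """
<nmaprun>
  <host>
    <address addr="192.168.1.10" addrtype="ipv4"/>
    <ports>
      <port protocol="tcp" portid="554"><state state="open"/></port>
      <port protocol="tcp" portid="8554"><state state="closed"/></port>
    </ports>
  </host>
</nmaprun>
"""

# Collect (address, port) pairs for every open port found in the scan.
root = ET.fromstring(nmap_xml)
open_rtsp = []
for host in root.iter("host"):
    addr = host.find("address").get("addr")
    for port in host.iter("port"):
        if port.find("state").get("state") == "open":
            open_rtsp.append((addr, int(port.get("portid"))))

print(open_rtsp)
```

Against a real scan file you would replace ET.fromstring with ET.parse("/tmp/cameradar_scan.xml").getroot(); the element names used here (host, address, ports, port, state) are the standard nmap -oX layout.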
Also, maybe your camera's credentials are not yet known, in which case, if you find them, it would be very nice to add them to the Cameradar dictionaries to help other people in the future.

What happened to the C++ version?
You can still find it under the 1.1.4 tag on this repo; however, it was less performant and stable than the current version written in Golang.

How to use the Cameradar library for my own project?
See the example in /cameradar. You just need to run go get github.com/Ullaakut/cameradar and to use the cmrdr package in your code. You can find the documentation on godoc.

I want to scan my own localhost for some reason and it does not work! What's going on?
Use the --net=host flag when launching the cameradar image, or use the binary by running go run cameradar/cameradar.go or installing it.

I don't see a colored output :(
You forgot the -t flag before ullaakut/cameradar in your command line. This tells docker to allocate a pseudo-tty for cameradar, which makes it able to use colors.

I don't have a camera but I'd like to try Cameradar!
Simply run docker run -p 8554:8554 -e RTSP_USERNAME=admin -e RTSP_PASSWORD=12345 -e RTSP_PORT=8554 ullaakut/rtspatt and then run cameradar, and it should guess that the username is admin and the password is 12345. You can try this with any default constructor credentials (they can be found here).

Examples

Running cameradar on your own machine to scan for default ports:
docker run --net=host -t ullaakut/cameradar -t localhost

Running cameradar with an input file, logs enabled, on port 8554:
docker run -v /tmp:/tmp --net=host -t ullaakut/cameradar -t /tmp/test.txt -p 8554 -l

Download Cameradar
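The "Check camera access" section above gives the stream URL format rtsp://username:password@address:port/route. A small helper to assemble it from attack results; the credentials and route are the sample values from the documentation, the address is a placeholder, and real passwords with special characters would additionally need percent-encoding:

```python
# Assemble the RTSP URL in the rtsp://username:password@address:port/route
# format described above. admin/12345 and /live.sdp are the documentation's
# sample values; 192.168.1.10 is a placeholder address.
def rtsp_url(username, password, address, port, route):
    return "rtsp://%s:%s@%s:%d/%s" % (
        username, password, address, port, route.lstrip("/"))

url = rtsp_url("admin", "12345", "192.168.1.10", 554, "/live.sdp")
print(url)  # rtsp://admin:12345@192.168.1.10:554/live.sdp
```

The resulting URL can then be opened in VLC (GUI or command line) to check camera access, as the section above suggests.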

Link: http://feedproxy.google.com/~r/PentestTools/~3/1bUGqwOggUY/cameradar-v210-hacks-its-way-into-rtsp.html

Evilginx2 v2.2.0 – Standalone Man-In-The-Middle Attack Framework Used For Phishing Login Credentials Along With Session Cookies, Allowing For The Bypass Of 2-Factor Authentication

evilginx2 is a man-in-the-middle attack framework used for phishing login credentials along with session cookies, which in turn allows bypassing 2-factor authentication protection.

This tool is a successor to Evilginx, released in 2017, which used a custom version of the nginx HTTP server to provide man-in-the-middle functionality to act as a proxy between a browser and the phished website. The present version is fully written in Go as a standalone application, which implements its own HTTP and DNS server, making it extremely easy to set up and use.

Video
See evilginx2 in action here:
Evilginx 2 - Next Generation of Phishing 2FA Tokens from breakdev.org on Vimeo.

Write-up
If you want to learn more about this phishing technique, I've published an extensive blog post about evilginx2 here:
https://breakdev.org/evilginx-2-next-generation-of-phishing-2fa-tokens

Phishlet Masters - Hall of Fame
Please thank the following contributors for devoting their precious time to deliver us fresh phishlets! (in order of first contributions)
@cust0msync - Amazon, Reddit
@white_fi - Twitter
rvrsh3ll @424f424f - Citrix

Installation
You can either use a precompiled binary package for your architecture or you can compile evilginx2 from source.

You will need an external server where you'll host your evilginx2 installation. I personally recommend Digital Ocean, and if you follow my referral link, you will get an extra $10 to spend on servers for free. Evilginx runs very well on the most basic Debian 8 VPS.

Installing from source
In order to compile from source, make sure you have installed Go of version at least 1.10.0 (get it from here) and that the $GOPATH environment variable is set up properly (def. $HOME/go).

After installation, add this to your ~/.profile, assuming that you installed Go in /usr/local/go:

export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin

Then load it with source ~/.profile.

Now you should be ready to install evilginx2.
Follow these instructions:

sudo apt-get install git make
go get -u github.com/kgretzky/evilginx2
cd $GOPATH/src/github.com/kgretzky/evilginx2
make

You can now either run evilginx2 from the local directory like:

sudo ./bin/evilginx -p ./phishlets/

or install it globally:

sudo make install
sudo evilginx

The instructions above can also be used to update evilginx2 to the latest version.

Installing with Docker
You can launch evilginx2 from within Docker. First build the container:

docker build . -t evilginx2

Then you can run the container:

docker run -it -p 53:53/udp -p 80:80 -p 443:443 evilginx2

Phishlets are loaded within the container at /app/phishlets, which can be mounted as a volume for configuration.

Installing from precompiled binary packages
Grab the package you want from here and drop it on your box. Then do:

unzip <package_name>.zip -d <package_name>
cd <package_name>

If you want to do a system-wide install, use the install script with root privileges:

chmod 700 ./install.sh
sudo ./install.sh
sudo evilginx

or just launch evilginx2 from the current directory (you will also need root privileges):

chmod 700 ./evilginx
sudo ./evilginx

Usage
IMPORTANT! Make sure that there is no service listening on ports TCP 443, TCP 80 and UDP 53. You may need to shut down apache or nginx, as well as any service used for resolving DNS, that may be running. evilginx2 will tell you on launch if it fails to open a listening socket on any of these ports.
By default, evilginx2 will look for phishlets in the ./phishlets/ directory and then in /usr/share/evilginx/phishlets/. If you want to specify a custom path to load phishlets from, use the -p <phishlets_dir_path> parameter when launching the tool.

Usage of ./evilginx:
  -debug
        Enable debug output
  -developer
        Enable developer mode (generates self-signed certificates for all hostnames)
  -p string
        Phishlets directory path

You should see the evilginx2 logo with a prompt to enter commands.
Type help or help <command> if you want to see available commands or more detailed information on them.

Getting started
To get up and running, you need to first do some setting up. At this point I assume you've already registered a domain (let's call it yourdomain.com) and you've set up the nameservers (both ns1 and ns2) in your domain provider's admin panel to point to your server's IP (e.g. Set up your server's domain and IP using the following commands:

config domain yourdomain.com
config ip

Now you can set up the phishlet you want to use. For the sake of this short guide, we will use a LinkedIn phishlet. Set up the hostname for the phishlet (it must contain your domain, obviously):

phishlets hostname linkedin my.phishing.hostname.yourdomain.com

And now you can enable the phishlet, which will initiate automatic retrieval of LetsEncrypt SSL/TLS certificates if none are locally found for the hostname you picked:

phishlets enable linkedin

Your phishing site is now live. Think of the URL you want the victim to be redirected to on successful login, and get the phishing URL like this (the victim will be redirected to https://www.google.com):

phishlets get-url linkedin https://www.google.com

Running phishlets will only respond to tokenized links, so any scanners that scan your main domain will be redirected to the URL specified as redirect_url under config. If you want to hide your phishlet and make it not respond even to valid tokenized phishing URLs, use the phishlet hide/unhide <phishlet> command.
You can monitor captured credentials and session cookies with:

sessions

To get detailed information about a captured session, with the session cookie itself (it will be printed in JSON format at the bottom), select its session ID:

sessions <id>

The captured session cookie can be copied and imported into the Chrome browser using the EditThisCookie extension.
Important! If you want evilginx2 to continue running after you log out from your server, you should run it inside a screen session.

Credits
Huge thanks to Simone Margaritelli (@evilsocket) for bettercap and for inspiring me to learn Go and rewrite the tool in that language!

Download Evilginx2
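Conceptually, the tokenized-link behavior described above amounts to a lookup table: each generated phishing URL embeds a random token, and any request without a known token is bounced to redirect_url. A minimal Python sketch of that idea (an illustration only, not evilginx2's actual implementation, which is written in Go):

```python
import secrets

class PhishletRouter:
    """Toy model of tokenized phishing links: only URLs carrying a known
    token are proxied; everything else is redirected to redirect_url."""

    def __init__(self, redirect_url):
        self.redirect_url = redirect_url
        self.tokens = {}  # token -> URL the victim lands on after login

    def get_url(self, hostname, success_url):
        # Mirrors "phishlets get-url": mint a token tied to a success redirect.
        token = secrets.token_urlsafe(8)
        self.tokens[token] = success_url
        return f"https://{hostname}/?token={token}"

    def route(self, token):
        # Valid token: serve the proxied phishing page; otherwise bounce scanners.
        if token in self.tokens:
            return ("proxy", self.tokens[token])
        return ("redirect", self.redirect_url)

router = PhishletRouter("https://example.org")
url = router.get_url("my.phishing.hostname.yourdomain.com", "https://www.google.com")
action, target = router.route(url.split("token=")[1])
print(action, target)  # proxy https://www.google.com
```

A scanner hitting the bare domain takes the `route()` miss path and only ever sees the redirect target.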

Link: http://www.kitploit.com/2018/12/evilginx2-v220-standalone-man-in-middle.html

MEC v1.4.0 – Mass Exploit Console

massExploitConsole: a collection of hacking tools with a CLI UI.

Disclaimer
Please use this tool only on authorized systems; I'm not responsible for any damage caused by users who ignore my warning.
Exploits are adapted from other sources; please refer to their author info.
Please note that, due to my limited programming experience (it's my first Python project), you can expect some silly bugs.

Features
- an easy-to-use CLI UI
- execute any adapted exploits with process-level concurrency
- some built-in exploits (automated)
- hide your IP address using proxychains4 and ss-proxy (built-in)
- zoomeye host scan (10 threads)
- a simple baidu crawler (multi-threaded)
- censys host scan

Getting started
git clone https://github.com/jm33-m0/massExpConsole.git && cd massExpConsole && ./install.py

When installing PyPI deps, apt-get install libncurses5-dev (for Debian-based distros) might be needed. Now you should be good to go (if not, please report missing deps here).
Type the proxy command to run a pre-configured Shadowsocks socks5 proxy in the background; vim ./data/ss.json to edit the proxy config. ss-proxy exits together with mec.py.

Requirements
- GNU/Linux, WSL, MacOS (not tested); fully tested under Arch Linux, Kali Linux (Rolling, 2018), Ubuntu Linux (16.04 LTS) and Fedora 25 (it will work on other distros too, as long as you have dealt with all deps)
- Python 3.5 or later (or something might go wrong: https://github.com/jm33-m0/massExpConsole/issues/7#issuecomment-305962655)
- proxychains4 (in $PATH), used by the exploiter; requires a working socks5 proxy (you can modify its config in mec.py)
- Java is required when using Java deserialization exploits; you might want to install openjdk-8-jre if you haven't installed it yet
- note that you have to install all the deps of your exploits or tools as well

Usage
Just run mec.py; if it complains about missing modules, install them.
If you want to add your own exploit script (or binary file, whatever):
- cd exploits and mkdir a directory for it; your exploit should take the last argument passed to it as its target; dig into mec.py to know more
- chmod +x <exploit> to make sure it can be executed by the current user
- use the attack command, then m, to select your custom exploit
Type help in the console to see all available features.
zoomeye requires a valid user account config file, zoomeye.conf.

Download MEC
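MEC's "process-level concurrency", running one exploit process per target with the target passed as the last argument, can be sketched like this (a hedged illustration, not MEC's actual code; `echo` stands in for a real exploit binary):

```python
import subprocess
from multiprocessing import Pool

def run_exploit(target):
    # MEC convention: the exploit receives the target as its last argument.
    # 'echo' stands in for a real exploit executable here.
    result = subprocess.run(["echo", "probing", target],
                            capture_output=True, text=True)
    return target, result.stdout.strip()

if __name__ == "__main__":
    targets = ["", "", ""]
    with Pool(processes=4) as pool:  # one OS process per concurrent exploit
        for target, output in pool.map(run_exploit, targets):
            print(f"[{target}] {output}")
```

Because each exploit runs in its own OS process, a crashing or hanging exploit script cannot take down the console itself, which is the main reason to prefer processes over threads here.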

Link: http://www.kitploit.com/2018/12/mec-v140-mass-exploit-console.html

Hayat – Auditing & Hardening Script For Google Cloud Platform

Hayat is an auditing & hardening script for Google Cloud Platform services such as:
- Identity & Access Management
- Networking
- Virtual Machines
- Storage
- Cloud SQL Instances
- Kubernetes Clusters
for now.

Identity & Access Management
- Ensure that corporate login credentials are used instead of Gmail accounts.
- Ensure that there are only GCP-managed service account keys for each service account.
- Ensure that ServiceAccount has no Admin privileges.
- Ensure that IAM users are not assigned the Service Account User role at project level.

Networking
- Ensure the default network does not exist in a project.
- Ensure legacy networks do not exist for a project.
- Ensure that DNSSEC is enabled for Cloud DNS.
- Ensure that RSASHA1 is not used for the key-signing key in Cloud DNS DNSSEC.
- Ensure that RSASHA1 is not used for the zone-signing key in Cloud DNS DNSSEC.
- Ensure that RDP access is restricted from the Internet.
- Ensure Private Google Access is enabled for all subnetworks in a VPC Network.
- Ensure VPC Flow Logs are enabled for every subnet in a VPC Network.

Virtual Machines
- Ensure that instances are not configured to use the default service account with full access to all Cloud APIs.
- Ensure "Block Project-wide SSH keys" is enabled for VM instances.
- Ensure oslogin is enabled for a Project.
- Ensure 'Enable connecting to serial ports' is not enabled for a VM Instance.
- Ensure that IP forwarding is not enabled on Instances.

Storage
- Ensure that the Cloud Storage bucket is not anonymously or publicly accessible.
- Ensure that logging is enabled for the Cloud Storage bucket.

Cloud SQL Database Services
- Ensure that the Cloud SQL database instance requires all incoming connections to use SSL.
- Ensure that Cloud SQL database Instances are not open to the world.
- Ensure that the MySQL database instance does not allow anyone to connect with administrative privileges.
- Ensure that the MySQL Database Instance does not allow root login from any host.

Kubernetes Engine
- Ensure Stackdriver Logging is set to Enabled on Kubernetes Engine Clusters.
- Ensure Stackdriver Monitoring is set to Enabled on Kubernetes Engine Clusters.
- Ensure Legacy Authorization is set to Disabled on Kubernetes Engine Clusters.
- Ensure Master authorized networks is set to Enabled on Kubernetes Engine Clusters.
- Ensure Kubernetes Clusters are configured with Labels.
- Ensure the Kubernetes web UI / Dashboard is disabled.
- Ensure Automatic node repair is enabled for Kubernetes Clusters.
- Ensure Automatic node upgrades are enabled on Kubernetes Engine Cluster nodes.

Requirements
Hayat has been written as a bash script using gcloud, and it's compatible with Linux and OSX.

Usage
git clone https://github.com/DenizParlak/Hayat.git && cd Hayat && chmod +x hayat.sh && ./hayat.sh

You can also run specific functions; e.g. if you want to scan just the Kubernetes Cluster:

./hayat.sh --only-kubernetes

Screenshots
Download Hayat
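Each check of this kind boils down to querying gcloud for a setting and flagging non-compliant values. A hedged Python sketch of that pattern (Hayat itself is a bash script; the gcloud call is stubbed with echo so the sketch runs anywhere):

```python
import json
import subprocess

def run_check(name, cmd, is_compliant):
    """Run a CLI command that emits JSON and apply a compliance predicate.
    In Hayat this would be a real `gcloud ... --format=json` invocation;
    here the command is stubbed so the sketch is self-contained."""
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    data = json.loads(out)
    status = "PASS" if is_compliant(data) else "FAIL"
    print(f"[{status}] {name}")
    return status

# Stub standing in for: gcloud dns managed-zones describe <zone> --format=json
fake_gcloud = ["echo", json.dumps({"dnssecConfig": {"state": "on"}})]

run_check("Ensure that DNSSEC is enabled for Cloud DNS",
          fake_gcloud,
          lambda zone: zone.get("dnssecConfig", {}).get("state") == "on")
```

Swapping the stub for real gcloud invocations and adding one predicate per rule above is essentially what an audit script like this automates.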

Link: http://feedproxy.google.com/~r/PentestTools/~3/eanL2lSrxVg/hayat-auditing-hardening-script-for.html

MCExtractor – Intel, AMD, VIA & Freescale Microcode Extraction Tool

Intel, AMD, VIA & Freescale Microcode Extraction Tool
MC Extractor News Feed
MC Extractor Discussion Topic
Intel, AMD & VIA CPU Microcode Repositories

A. About MC Extractor
MC Extractor is a tool which parses Intel, AMD, VIA and Freescale processor microcode binaries. It can be used by end-users who are looking for all relevant microcode information such as CPUID, Platform, Version, Date, Release, Size, Checksum etc. It is capable of converting Intel microcode containers (dat, inc, h, txt) to binary images for BIOS integration, detecting new/unknown microcodes, checking microcode health, Updated/Outdated status and more. MC Extractor can also be used as a research analysis tool, with multiple structures which allow, among others, full parsing & information display of all microcode headers, documented or not. Moreover, with the help of its extensive database, MC Extractor is capable of uniquely categorizing all supported microcodes, as well as checking for any microcodes which have not been stored at the Microcode Repositories yet.

A1. MC Extractor Features
- Supports all current & legacy Microcodes from 1995 and onward
- Scans for all Intel, AMD, VIA & Freescale microcodes in one run
- Verifies all extracted microcode integrity via Checksums
- Checks if all Intel, AMD & VIA microcodes are Latest or Outdated
- Converts Intel containers (dat, inc, txt, h) to binary images
- Searches on demand for all microcodes based on CPUID
- Shows microcode Header structures and details on demand
- Ignores most false positives based on sanity checks
- Supports known special, fixed or modded microcodes
- Ability to quickly add new microcode entries to the database
- Ability to detect the Intel Production/Pre-Production Release tag
- Ability to analyze multiple files by drag & drop or by input path
- Ability to ignore extracted duplicates based on name and contents
- Reports all microcodes which are not found at the Microcode Repositories
- Features command line parameters to enhance functionality & assist research
- Features user friendly messages & proper handling of unexpected code errors
- Shows results in nice tables with colored text to signify emphasis
- Open Source project licensed under GNU GPL v3, comment assisted code

A2. Microcode Repository Database
MC Extractor allows end-users and/or researchers to quickly extract, view, convert & report new microcode versions without the use of special tools or Hex Editors. To do that effectively, a database had to be built. The Intel, AMD & VIA CPU Microcode Repositories is a collection of every Intel, AMD & VIA CPU microcode we have found. Its existence is very important for MC Extractor, as it allows us to continue doing research, find new types of microcode, compare releases for similarities, check for updated binaries etc. Bundled with MC Extractor is a file called MCE.db which is required for the program to run. It includes entries for all microcode binaries that are available to us. This accommodates primarily two actions: a) check whether the imported microcode is up to date and b) help find new microcode releases sooner by reporting them at the Intel, AMD & VIA CPU Microcode Repositories Discussion thread.

A3. Sources and Inspiration
MC Extractor was initially based on a fraction of Lordkag's UEFIStrip tool so, first and foremost, I thank him for all his work which inspired this project. Among others, great places to learn about microcodes are Intel's own download site and official documentation, Intel Microcode Patch Authentication, Coreboot (a, b, c), Microparse by Dominic Chen, Ben Hawkes's Notes and Research, Richard A Burton's Microdecode, AIDA64 CPUID dumps, Sandpile CPUID, Free Electrons (a, b), Freescale, and many more which I may have forgotten but would have been here otherwise.

B. How to use MC Extractor
There are two ways to use MC Extractor: the MCE executable & the Command Prompt. The MCE executable allows you to drag & drop one or more firmware files and view them one by one, or recursively scan entire directories. To manually call MC Extractor, a Command Prompt can be used with -skip as parameter.

B1. MC Extractor Executable
To use MC Extractor, select one or multiple files and Drag & Drop them to its executable. You can also input certain optional parameters either by running MCE directly or by first dropping one or more files to it. Keep in mind that, due to operating system limitations, there is a limit on how many files can be dropped at once. If the latter is a problem, you can always use the -mass parameter to recursively scan entire directories, as explained below.

B2. MC Extractor Parameters
There are various parameters which enhance or modify the default behavior of MC Extractor:

-?      : Displays help & usage screen
-skip   : Skips welcome & options screen
-exit   : Skips "Press enter to exit" prompt
-redir  : Enables console redirection support
-mass   : Scans all files of a given directory
-info   : Displays microcode header(s)
-add    : Adds new input microcode to DB
-dbname : Renames input file based on DB name
-cont   : Extracts Intel containers (dat,inc,h,txt)
-search : Searches for microcodes based on CPUID
-last   : Shows Latest status based on user input
-repo   : Builds microcode repositories from input

B3. MC Extractor Error Control
During operation, MC Extractor may encounter issues that can trigger Notes, Warnings and/or Errors. Notes (yellow/green color) provide useful information about a characteristic of this particular firmware. Warnings (purple color) notify the user of possible problems that can cause system instability. Errors (red color) are shown when something unexpected or problematic is encountered.

C. Download MC Extractor
MC Extractor consists of two files: the executable (MCE.exe or MCE) and the database (MCE.db). An already built/frozen/compiled binary is provided by me for Windows only (icon designed by Alfredo Hernandez). Thus, you don't need to manually build/freeze/compile MC Extractor under Windows. Instead, download the latest version from the Releases tab; the title should be "MC Extractor v1.X.X". You may need to scroll down a bit if there are DB releases at the top. The latter can be used to update the outdated DB which was bundled with the latest executable release; the title should be "DB rXX". To extract the already built/frozen/compiled archive, you need to use programs which support RAR5 compression.

C1. Compatibility
MC Extractor should work on all Windows, Linux or macOS operating systems which have Python 3.6 support. Windows users who plan to use the already built/frozen/compiled binaries must make sure that they have the latest Windows Updates installed, which include all required "Universal C Runtime (CRT)" libraries.

C2. Code Prerequisites
To run MC Extractor's python script, you need to have the following 3rd party Python modules installed:

Colorama
pip3 install colorama

PTable
pip3 install https://github.com/platomav/PTable/archive/boxchar.zip

C3. Build/Freeze/Compile with PyInstaller
PyInstaller can build/freeze/compile MC Extractor on all three supported platforms; it is simple to run and gets updated often.

Make sure Python 3.6.0 or newer is installed:
python --version

Use pip to install PyInstaller:
pip3 install pyinstaller

Use pip to install colorama:
pip3 install colorama

Use pip to install PTable:
pip3 install https://github.com/platomav/PTable/archive/boxchar.zip

Build/Freeze/Compile MC Extractor:
pyinstaller --noupx --onefile MCE.py

At the dist folder you should find the final MCE executable.

D. Pictures
Note: Some pictures are outdated and depict older MC Extractor versions.

Download MCExtractor
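Of the features above, checksum verification is the most mechanical: for Intel microcode updates, the documented rule (Intel SDM, microcode update facilities) is that the 32-bit sum of all dwords in the update equals zero modulo 2^32. A minimal sketch of that check (the sample data below is fabricated for illustration, not a real microcode):

```python
import struct

def intel_mc_checksum_ok(blob: bytes) -> bool:
    """An Intel microcode update is internally consistent when the sum of
    all its little-endian 32-bit dwords is zero modulo 2**32."""
    if len(blob) % 4:
        return False
    total = sum(struct.unpack("<%dI" % (len(blob) // 4), blob)) & 0xFFFFFFFF
    return total == 0

# Build a toy 8-dword "update" whose final checksum dword zeroes the sum.
dwords = [0x00000001, 0xA1B2C3D4, 0x00000100, 0x20180101, 0, 0, 0]
checksum = (-sum(dwords)) & 0xFFFFFFFF
blob = struct.pack("<8I", *dwords, checksum)
print(intel_mc_checksum_ok(blob))  # True
```

A single flipped byte anywhere in the blob changes the dword sum and fails the check, which is why this is a cheap first-pass health test for extracted microcodes.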

Link: http://feedproxy.google.com/~r/PentestTools/~3/UdW1gu5O6Ds/mcextractor-intel-amd-via-freescale.html

Trape v2.0 – People Tracker On The Internet: OSINT Analysis And Research Tool

Trape is an OSINT analysis and research tool, which allows people to track and execute intelligent social engineering attacks in real time. It was created with the aim of teaching the world how large Internet companies could obtain confidential information, such as the status of sessions on their websites or services, and gain control over their users through the browser without them knowing. It has since evolved with the aim of helping government organizations, companies and researchers to track cybercriminals.
At the beginning of 2018 it was presented at BlackHat Arsenal in Singapore (https://www.blackhat.com/asia-18/arsenal.html#jose-pino) and at multiple security events worldwide.

Some benefits
LOCATOR OPTIMIZATION: Trace the path between you and the target you're tracking. Each time you make a move, the path is updated; the location of the target is obtained silently through a bypass made in the browsers, letting you skip the location permission request on the victim's side while maintaining 99% precision in the locator.
APPROACH: When you're close to the target, Trape will tell you.
REST API: Generates an API (random or custom), and through this you can control and monitor other Web sites on the Internet remotely, getting the traffic of all visitors.
PROCESS HOOKS: Manages social engineering attacks or processes in the target's browser.
— SEVERAL: You can issue a phishing attack of any domain or service in real time, as well as send malicious files to compromise the device of a target.
— INJECT JS: You keep JavaScript code running freely in real time, so you can manage the execution of a keylogger or your own custom functions in JS, which will be reflected in the target's browser.
— SPEECH: An audio creation process is maintained and played in the browser of the target; through this you can deliver personalized messages, in different voices, in Spanish and English.
PUBLIC NETWORK TUNNEL: Trape has its own API linked to ngrok.com to allow automatic management of public network tunnels; with this you can publish the content of your locally executed trape server to the Internet, to manage hooks or public attacks.
CLICK ATTACK TO GET CREDENTIALS: Automatically obtains the target's credentials, recognizing their connection availability on a social network or Internet service.
NETWORK: You can get information about the user's network.
— SPEED: Viewing the target's network speed. (Ping, download, upload, connection type)
— HOSTS OR DEVICES: Here you can get an automatic scan of all the devices connected to the target network.
PROFILE: Brief summary of the target's behavior and important additional information about their device.
— GPU
— ENERGY

Session recognition
Session recognition is one of trape's most interesting attractions, since you, as a researcher, can know remotely what service the target is connected to.
USABILITY: You can delete logs and view alerts for each process or action you run against each target.

How to use it
First download the tool:

git clone https://github.com/jofpin/trape.git
cd trape
python trape.py -h

If it does not work, try to install all the libraries that are listed in the file requirements.txt:

pip install -r requirements.txt

Example of execution
Example: python trape.py --url http://example.com --port 8080

HELP AND OPTIONS
user:~$ python trape.py --help
usage: python trape.py -u <> -p <> [-h] [-v] [-u URL] [-p PORT] [-ak ACCESSKEY] [-l LOCAL] [--update] [-n] [-ic INJC]

optional arguments:
  -h, --help            show this help message and exit
  -v, --version         show program's version number and exit
  -u URL, --url URL     Put the web page url to clone
  -p PORT, --port PORT  Insert your port
  -ak ACCESSKEY, --accesskey ACCESSKEY
                        Insert your custom key access
  -l LOCAL, --local LOCAL
                        Insert your home file
  -n, --ngrok           Insert your ngrok Authtoken
  -ic INJC, --injectcode INJC
                        Insert your custom REST API path
  -ud UPDATE, --update UPDATE
                        Update trape to the latest version

--url         In this option you add the URL you want to clone, which works as a decoy.
--port        Here you insert the port where you are going to run the trape server.
--accesskey   You enter a custom key for the trape panel; if you do not insert one, an automatic key will be generated.
--injectcode  trape contains a REST API to play anywhere; using this option you can customize the name of the file to include; if you don't, it generates a random name allusive to a token.
--local       Using this option you can call a local HTML file; this is the replacement for the --url option, made to run a local lure in trape.
--ngrok       In this option you can enter a token to run at the time of a process. This would replace the token saved in configurations.
--version     You can see the version number of trape.
--update      Option especially to upgrade to the latest version of trape.
--help        It is used to see all the above options, from the executable.

Disclaimer
This tool has been published for educational purposes, in order to teach people how bad guys could track them, monitor them or obtain information from their credentials; we are not responsible for the use or the scope people may give this project.
We are totally convinced that if we teach how vulnerable things are, we can make the Internet a safer place.

Developer
In this development, and others, the participants will be mentioned with name, Twitter and role.
CREATOR
— Jose Pino – @jofpin – (Security Researcher)

Download Trape v2.0
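The help output above maps onto a standard argparse definition. A minimal Python sketch reconstructing those options from the help text (an illustration inferred from the output shown, not trape's actual source):

```python
import argparse

def build_parser():
    # Options reconstructed from trape's --help output above.
    p = argparse.ArgumentParser(prog="python trape.py")
    p.add_argument("-v", "--version", action="version", version="trape 2.0")
    p.add_argument("-u", "--url", help="Put the web page url to clone")
    p.add_argument("-p", "--port", type=int, help="Insert your port")
    p.add_argument("-ak", "--accesskey", help="Insert your custom key access")
    p.add_argument("-l", "--local", help="Insert your home file")
    p.add_argument("-n", "--ngrok", help="Insert your ngrok Authtoken")
    p.add_argument("-ic", "--injectcode", help="Insert your custom REST API path")
    return p

args = build_parser().parse_args(["--url", "http://example.com", "--port", "8080"])
print(args.url, args.port)  # http://example.com 8080
```

Running the parser with `-h` reproduces a usage line very close to the one shown in the HELP AND OPTIONS section.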

Link: http://www.kitploit.com/2018/11/trape-v20-people-tracker-on-internet.html