Probequest – Toolkit For Playing With Wi-Fi Probe Requests

A toolkit for sniffing and displaying the Wi-Fi probe requests passing near your wireless interface.

Probe requests are sent by a station to elicit information about access points, in particular to determine whether an access point is present in the nearby environment. Some devices (mostly smartphones and tablets) use these requests to determine whether one of the networks they have previously been connected to is in range, leaking personal information. Further details are discussed in this paper.

Installation

pip3 install --upgrade probequest

Documentation

The project is documented here.

Usage

Enabling monitor mode

To be able to sniff probe requests, your Wi-Fi network interface must be set to monitor mode.

With ifconfig and iwconfig:

sudo ifconfig <wireless interface> down
sudo iwconfig <wireless interface> mode monitor
sudo ifconfig <wireless interface> up

For example:

sudo ifconfig wlan0 down
sudo iwconfig wlan0 mode monitor
sudo ifconfig wlan0 up

With airmon-ng from aircrack-ng:

To kill all the interfering processes:

sudo airmon-ng check kill

To enable monitor mode:

sudo airmon-ng start <wireless interface>

For example:

sudo airmon-ng start wlan0

Command line arguments

usage: probequest [-h] [--debug] -i INTERFACE [--ignore-case] [--mode {RAW,TUI}] [-o OUTPUT] [--version] [-e ESSID [ESSID ...] | -r REGEX] [--exclude EXCLUDE [EXCLUDE ...] | -s STATION [STATION ...]]

--debug          debug mode (default: False)
-i, --interface  wireless interface to use (must be in monitor mode)
--ignore-case    ignore case distinctions in the regex pattern (default: False)
--mode           set the mode to use; possible choices: RAW, TUI (default: RAW)
-o, --output     output file to save the captured data (CSV format)
--version        show program's version number and exit
-e, --essid      ESSIDs of the APs to filter (space-separated list)
-r, --regex      regex to filter the ESSIDs
--exclude        MAC addresses of the stations to exclude (space-separated list)
-s, --station    MAC addresses of the stations to filter (space-separated list)

Example of use:

sudo probequest -i wlan0

Download Probequest
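The -e, -r and --ignore-case options describe a small filtering policy over captured ESSIDs. A minimal sketch of how such a filter could work (this is an illustration, not probequest's actual code):

```python
import re

def make_essid_filter(regex=None, essids=None, ignore_case=False):
    """Return a predicate deciding whether a probe request's ESSID should
    be displayed, mirroring the -e / -r / --ignore-case options.
    A sketch only; probequest's real implementation may differ."""
    if essids is not None:
        allowed = set(essids)
        return lambda essid: essid in allowed
    if regex is not None:
        flags = re.IGNORECASE if ignore_case else 0
        pattern = re.compile(regex, flags)
        return lambda essid: pattern.search(essid) is not None
    return lambda essid: True  # no filter given: keep everything

# Example: keep only ESSIDs matching "corp", case-insensitively
keep = make_essid_filter(regex="corp", ignore_case=True)
```

Note that -e and -r are mutually exclusive in probequest's usage string, which is why the sketch checks only one of them.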

Link: http://feedproxy.google.com/~r/PentestTools/~3/fpE9V2W2e84/probequest-toolkit-for-playing-with-wi.html

Takeover – SubDomain TakeOver Vulnerability Scanner

A sub-domain takeover vulnerability occurs when a sub-domain (subdomain.example.com) points to a service (e.g. GitHub, AWS/S3, ...) that has been removed or deleted. This allows an attacker to set up a page on the service that was being used and point their page to that sub-domain. For example, if subdomain.example.com was pointing to a GitHub page and the user decided to delete their GitHub page, an attacker can now create a GitHub page, add a CNAME file containing subdomain.example.com, and claim subdomain.example.com. For more information: here.

Installation:

# git clone https://github.com/m4ll0k/takeover.git
# cd takeover
# python takeover.py

or:

wget -q https://raw.githubusercontent.com/m4ll0k/takeover/master/takeover.py && python takeover.py

Download Takeover
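Scanners of this kind typically work by fetching the sub-domain and matching the response body against known "dangling service" error fingerprints. A minimal sketch of that matching step (the fingerprint strings below are illustrative examples, not takeover.py's actual list):

```python
# Hypothetical fingerprint table: service name -> error text the service
# returns when no page is claimed. Real scanners ship a much longer list.
FINGERPRINTS = {
    "GitHub Pages": "There isn't a GitHub Pages site here",
    "Heroku": "No such app",
    "AWS/S3": "NoSuchBucket",
}

def takeover_candidate(body):
    """Return the name of the dangling service whose error fingerprint
    appears in the HTTP response body, or None if nothing matches."""
    for service, marker in FINGERPRINTS.items():
        if marker in body:
            return service
    return None
```

A hit only flags a candidate; whether the sub-domain is actually claimable still has to be verified manually on the service in question.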

Link: http://feedproxy.google.com/~r/PentestTools/~3/bCpPqZo0iAg/takeover-subdomain-takeover.html

Rastrea2R – Collecting & Hunting For IOCs With Gusto And Style

Ever wanted to turn your AV console into an Incident Response & Threat Hunting machine? Rastrea2r (pronounced "rastreador", hunter in Spanish) is a multi-platform open source tool that allows incident responders and SOC analysts to triage suspect systems and hunt for Indicators of Compromise (IOCs) across thousands of endpoints in minutes. To parse and collect artifacts of interest from remote systems (including memory dumps), rastrea2r can execute Sysinternals tools, system commands and other third-party tools across multiple endpoints, saving the output to a centralized share for automated or manual analysis. Using a client/server RESTful API, rastrea2r can also hunt for IOCs on disk and in memory across multiple systems using YARA rules. As a command line tool, rastrea2r can be easily integrated with McAfee ePO, as well as other AV consoles and orchestration tools, allowing incident responders and SOC analysts to collect forensic evidence and hunt for IOCs without the need for an additional agent, with 'gusto' and style!
Dependencies

- Python 2.7.x
- git
- bottle
- requests
- yara-python

Quickstart

Clone the project to your local directory (or download the zip file of the project):

$ git clone https://github.com/rastrea2r/rastrea2r.git
$ cd rastrea2r

All the dependencies necessary for the tool to run can be installed within a virtual environment via the provided makefile:

$ make help
help - display this makefile's help information
venv - create a virtual environment for development
clean - clean all files using .gitignore rules
scrub - clean all files, even untracked files
test - run tests
test-verbose - run tests [verbosely]
check-coverage - perform test coverage checks
check-style - perform pep8 check
fix-style - perform check with autopep8 fixes
docs - generate project documentation
check-docs - quick check docs consistency
serve-docs - serve project html documentation
dist - create a wheel distribution package
dist-test - test a wheel distribution package
dist-upload - upload a wheel distribution package

Create a virtual environment with all dependencies:

$ make venv

Upon successful creation of the virtual environment, enter it as instructed, for example:

$ source /Users/ssbhat/.venvs/rastrea2r/bin/activate

Start the rastrea2r server from the $PROJECT_HOME/src/rastrea2r/server folder:

$ cd src/rastrea2r/server/
$ python rastrea2r_server_v0.3.py
Bottle v0.12.13 server starting up (using WSGIRefServer())...
Listening on http://0.0.0.0:8080/

Now execute the client program, choosing the target Python script appropriate to the platform you are trying to scan. Windows, Linux and Mac platforms are currently supported.

$ python rastrea2r_osx_v0.3.py -h
usage: rastrea2r_osx_v0.3.py [-h] [-v] {yara-disk,yara-mem,triage} ...

Rastrea2r RESTful remote Yara/Triage tool for Incident Responders

positional arguments:
  {yara-disk,yara-mem,triage}  modes of operation
    yara-disk  Yara scan for file/directory objects on disk
    yara-mem   Yara scan for running processes in memory
    triage     Collect triage information from endpoint

optional arguments:
  -h, --help     show this help message and exit
  -v, --version  show program's version number and exit

Furthermore, the available options under each command can be viewed by executing the help option, i.e.:

$ python rastrea2r_osx_v0.3.py yara-disk -h
usage: rastrea2r_osx_v0.3.py yara-disk [-h] [-s] path server rule

positional arguments:
  path    file or directory path to scan
  server  rastrea2r REST server
  rule    Yara rule on REST server

optional arguments:
  -h, --help    show this help message and exit
  -s, --silent  suppress standard output

For example, on a Mac or Unix system you would do:

$ cd src/rastrea2r/osx/
$ python rastrea2r_osx_v0.3.py yara-disk /opt http://127.0.0.1:8080/ test.yar

Executing rastrea2r on Windows

Apart from the libraries specified in requirements.txt, the following libraries need to be installed:

- psutil for win64: https://github.com/giampaolo/psutil
- WMI for win32: https://pypi.python.org/pypi/WMI/
- Requests: pip install requests

Compiling rastrea2r

Make sure you have all the dependencies installed for the binary you are going to build on your Windows box. Then install:

- Pywin32: http://sourceforge.net/projects/pywin32/files/ (Windows only)
- PyInstaller: https://github.com/pyinstaller/pyinstaller/wiki

Currently supported functionality

- yara-disk: Yara scan for file/directory objects on disk
- yara-mem: Yara scan for running processes in memory
- memdump: acquires a memory dump from the endpoint (Windows only)
- triage: collects triage information from the endpoint (Windows only)

Notes

For the memdump and triage modules, SMB shares must be set up in this specific way:

- Binaries (Sysinternals, batch files and others) must be located in a shared folder called TOOLS (read only): \\path-to-share-folder\tools
- Output is sent to a shared folder called DATA (write only): \\path-to-share-folder\data

For yara-mem and yara-disk scans, the Yara rules must be in the same directory the server is executed from. The RESTful API server stores received data in a file called results.txt in the same directory.

Contributing to the rastrea2r project

The Developer Documentation provides complete information on how to contribute to the rastrea2r project.

Demo videos on YouTube

- Video 1: Incident Response / Triage with rastrea2r on the command line – https://youtu.be/uFIZxqWeSyQ
- Video 2: Remote Yara scans with rastrea2r on the command line – https://youtu.be/cnY1yEslirw
- Video 3: Using rastrea2r with McAfee ePO – Client Tasks & Execution – https://youtu.be/jB17uLtu45Y

Presentations

- rastrea2r at BlackHat Arsenal 2016 (check the PDF for documentation on usage and examples):
  https://www.blackhat.com/us-16/arsenal.html#rastrea2r
  https://github.com/aboutsecurity/Talks-and-Presentations/blob/master/Ismael_Valenzuela-Hunting_for_IOCs_rastrea2r-BH_Arsenal_2016.pdf
- Recording of the talk on rastrea2r at the SANS Threat Hunting Summit 2016:
  https://www.youtube.com/watch?v=0PvBsL6KKfA&feature=youtu.be&a

Credits & References

Thanks to Robert Gresham Jr. (@rwgresham) and Ryan O'Connor (@_remixed) for their contributions to the Triage module. Thanks folks!

Thanks to Ricardo Dias for the idea of using a REST server and his great paper on how to use Python and Yara with McAfee ePO: http://www.sans.org/reading-room/whitepapers/forensics/intelligence-driven-incident-response-yara-35542

Download Rastrea2R
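At its core, the server side of this architecture just receives scan results over HTTP and appends them to a central store (rastrea2r uses bottle and a results.txt file). A stdlib-only sketch of that pattern, with an in-memory list standing in for results.txt (names and payload here are illustrative, not rastrea2r's API):

```python
import http.server
import threading
import urllib.request

results = []  # stands in for rastrea2r's results.txt

class ResultsHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the POSTed scan result and record it centrally.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8")
        results.append(body)
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind to an ephemeral port and serve in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), ResultsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client (e.g. an endpoint agent) reports a match back to the server.
url = "http://127.0.0.1:%d/" % server.server_port
req = urllib.request.Request(url, data=b"host1: rule matched", method="POST")
urllib.request.urlopen(req).close()
server.shutdown()
```

The real tool layers Yara scanning, triage commands and SMB output shares on top of this request/response loop.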

Link: http://feedproxy.google.com/~r/PentestTools/~3/dD0nCbbILCw/rastrea2r-collecting-hunting-for-iocs.html

Omnibus – Open Source Intelligence Collection, Research, And Artifact Management

An omnibus is defined as "a volume containing several novels or other items previously published separately", and that is exactly what the InQuest Omnibus project intends to be for Open Source Intelligence collection, research, and artifact management.

By providing an easy-to-use interactive command line application, users are able to create sessions to investigate various artifacts such as IP addresses, domain names, email addresses, usernames, file hashes, Bitcoin addresses, and more as we continue to expand.

This project has taken motivation from the greats that came before it, such as SpiderFoot, Harpoon, and DataSploit. Much thanks to those great authors for contributing to the world of open source.

The application is written with Python 2.7 in mind and has been successfully tested on OSX and Ubuntu 16.04 environments. As this is a pre-release of the final application, there will very likely be some bugs, uncaught exceptions, or other weirdness during usage. For the most part, though, it is fully functional and can be used to begin OSINT investigations right away.

Vocabulary

Before we begin, we'll need to cover some terminology used by Omnibus.

Artifact:
- An item to investigate
- Artifacts can be created in two ways: using the new command, or by being discovered through module execution

Session:
- Cache of artifacts created after starting the Omnibus CLI
- Each artifact in a session is given an ID to quickly identify and retrieve the artifact from the cache
- Commands can be executed against an artifact either by providing its name or its corresponding session ID

Module:
- Python script that performs some arbitrary OSINT task against an artifact

Running Omnibus

Starting up Omnibus for investigation is as simple as cloning this GitHub repository, installing the Python requirements with pip install -r requirements.txt, and running python2.7 omnibus-cli.py.
Omnibus Shell – Main Startup

For a visual reference of the CLI, pictured above is the Omnibus console after a new session has been started, two artifacts have been added to the session, and the help menu is shown.

API Keys

You must set any API keys you'd like to use within modules inside the omnibus/etc/apikeys.json file. This file is a JSON document with placeholders for all the services which require API keys, and is only accessed by Omnibus on a per-module basis to retrieve the exact API key a module needs to execute.

It should be noted that most of the services requiring API keys have free accounts and API keys. Some free accounts may have lower resource limits, but that hasn't been a problem during smaller daily investigations or testing of the application.

A handy tip: use the cat apikeys command to view which keys you actually have stored. If modules are failing, check here first to ensure your API key is properly saved.

Interactive Console

When you first run the CLI, you'll be greeted by a help menu with some basic information. We tried to build the command line script to mimic some common Linux console commands for ease of use. Omnibus provides commands such as cat to show information about an artifact, rm to remove an artifact from the database, ls to view current session artifacts, and so on.

One additional feature of note is the use of the > character for output redirection. For example, if you wish to save the details of an artifact named "inquest.net" to a JSON file on your local disk, you'd simply run the command cat inquest.net > inquest-report.json, and there it would be!
This feature also works with full file paths instead of relative paths.

The high-level commands you really need to know to use Omnibus are:

- session – start a new session
- new – create a new artifact for investigation
- modules – display the list of available modules
- open <file path> – load a text file list of artifacts into Omnibus as artifacts
- cat <artifact name | session id> – view beautified JSON database records
- ls – show all active artifacts
- rm – remove an artifact from the database
- wipe – clear the current artifact session

Also, if you ever need a quick reference on the different commands available for different areas of the application, there are sub-help menus for this exact purpose. Using these commands will show you only those commands relevant to a specific area:

- general – overall commands such as help, history, quit, set, clear, banner, etc.
- artifacts – display commands specific to artifacts and their management
- sessions – display helpful commands around managing sessions
- modules – show a list of all available modules

Artifacts

Overview

Most cyber investigations begin with one or more technical indicators, such as an IP address, file hash or email address. After searching and analyzing, relationships begin to form and you can pivot through connected data points. These data points are called Artifacts within Omnibus and represent any item you wish to investigate.

Artifacts can be one of the following types:

- IPv4 address
- FQDN
- Email address
- Bitcoin address
- File hash (MD5, SHA1, SHA256, SHA512)
- User name

Creating & Managing Artifacts

The command new followed by an artifact will create that artifact within your Omnibus session and store a record of the artifact within MongoDB. This record holds the artifact name, type, subtype, module results, source, notes, tags, children information (as needed) and time of creation.
Every time you run a module against a created or stored artifact, the database document will be updated to reflect the newly discovered information.

To create a new artifact and add it to MongoDB for tracking, run the command new <artifact name>. For example, to start investigating the domain deadbits.org, you would run new deadbits.org. Omnibus will automatically determine what type the artifact is and ensure that only modules for that type are executed against it.

When a module is run, new artifacts may be found during the discovery process. For example, running the dnsresolve command might find new IPv4 addresses not previously seen by Omnibus. If this is the case, those newly found artifacts are automatically created as new artifacts in Omnibus and linked to their parent with an additional field called "source" to identify which module they were originally found by.

Artifacts can be removed from the database using the delete command. If you no longer need an artifact, simply run the delete command and specify the artifact's name, or its session ID if it has one.

Sessions

Omnibus makes use of a feature called "sessions". Sessions are temporary caches created via Redis each time you start a CLI session. Every time you create an artifact, that artifact's name is added to the session along with a numeric key that makes for easy retrieval, searching, and action against the related artifact. For example, if your session held one item, "inquest.net", instead of executing virustotal inquest.net you could also run virustotal 1 and you would receive the same results. In fact, this works with any module or command that takes an artifact name as its first argument.

Sessions are here for easy access to artifacts and will be cleared each time you quit the command line session.
If you wish to clear the session early, run the command wipe and you'll get a clean slate.

Eventually, we would like to add a Cases portion to Omnibus that allows users to create cases of artifacts, move between them, and maintain a more coherent OSINT management platform. For this current pre-release, though, we will be sticking with the Session. 🙂

Interacting with session IDs instead of artifact names

Modules

Omnibus currently supports the following list of modules. If you have suggestions for modules, or would like to write one of your own, please create a pull request. Also, within the Omnibus console, typing a module name will show you the help information associated with that module.

- Blockchain.info
- Censys
- ClearBit
- Cymon
- DNS subdomain enumeration
- DNS resolution
- DShield (SANS ISC)
- GeoIP lookup
- Full Contact
- Gist scraping
- GitHub user search
- HackedEmails.com email search
- Hurricane Electric host search
- HIBP search
- Hunter.io
- IPInfo
- IPVoid
- KeyBase
- Nmap
- PassiveTotal
- Pastebin
- PGP email and name lookup
- RSS feed reader
- Shodan
- Security news reader
- ThreatCrowd
- ThreatExpert
- TotalHash
- Twitter
- URLVoid
- VirusTotal
- Web Recon
- WHOIS

As these modules are a work in progress, some may not yet work as expected, but this will change over the coming weeks as we hope to officially release version 1.0 to the world!

Machines

Machines are a simple way to run all available modules for an artifact type against a given artifact. This is the fast way to gather as much information on a target as possible with a single command. To do this, simply run the command machine <artifact name|session ID> and wait a few minutes until the modules are finished executing.

The only caveat is that this may return a large volume of data and child artifacts, depending on the artifact type and the results per module.
To remedy this, we are investigating a way to remove specific artifact fields from the stored database document to make it easier for users to prune unwanted data.

Quick Reference Guide

Some quick commands to remember are:

- session – start a new artifact cache
- cat <artifact name>|apikeys – pretty-print an artifact's document, or view your stored API keys
- open <file path> – load a text file list of artifacts into Omnibus for investigation
- new <artifact name> – create a new artifact and add it to MongoDB and your session
- find <artifact name> – check whether an artifact exists in the database and show the results

Reporting

Reports are the JSON output of an artifact's database document, essentially a text-file version of the output of the cat command. Using the report command you may specify an artifact and a file path to save the output to:

omnibus >> report inquest.net /home/adam/intel/osint/reports/inq_report.json

The above command overrides the standard report directory of omnibus/reports. By default, if you do not specify a report path, all reports will be saved to that location. Also, if you do not specify a file name, the report will use the following format: [artifact_name]_[timestamp].json

Redirection

The output of commands can also be saved to arbitrary text files using the standard Linux character >. For example, if you wish to store the output of a VirusTotal lookup for a host in a file called vt-lookup.json, you would simply execute:

virustotal inquest.net > vt-lookup.json

By default, redirected output files are saved in the current working directory (omnibus/), but if you specify a full path such as virustotal inquest.net > /home/adam/intel/cases/001/vt-lookup.json, the JSON-formatted output will be saved there.

Monitoring Modules

Omnibus will soon offer the ability to monitor specific keywords and regex patterns across different sources.
Once a match is found, an email or text message alert could be sent to the user to inform them of the discovery. This could be leveraged for real-time threat tracking, identifying when threat actors appear on new forums or make a fresh Pastebin post, or simply to stay on top of the current news.

Coming monitors include:

- RSS monitor
- Pastebin monitor
- Generic paste-site monitoring
- Generic HTTP/JSON monitoring

Download Omnibus
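The automatic artifact-type detection described above (IPv4, FQDN, email, hash, Bitcoin address, falling back to user name) can be sketched as an ordered list of regex checks. This is an illustration of the idea, not Omnibus's actual detection code:

```python
import re

# Ordered checks: more specific patterns first, so an email is not
# misread as an FQDN. Patterns are deliberately simple sketches.
PATTERNS = [
    ("ipv4",  re.compile(r"^(\d{1,3}\.){3}\d{1,3}$")),
    ("email", re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")),
    ("hash",  re.compile(r"^[a-fA-F0-9]{32}$|^[a-fA-F0-9]{40}$"
                         r"|^[a-fA-F0-9]{64}$|^[a-fA-F0-9]{128}$")),
    ("btc",   re.compile(r"^[13][a-km-zA-HJ-NP-Z1-9]{25,34}$")),
    ("fqdn",  re.compile(r"^([a-zA-Z0-9-]+\.)+[a-zA-Z]{2,}$")),
]

def detect_type(artifact):
    """Classify an artifact string; anything unmatched is treated as a
    user name, mirroring Omnibus's artifact types."""
    for name, pattern in PATTERNS:
        if pattern.match(artifact):
            return name
    return "user"
```

With a detector like this, new <artifact name> can route the artifact to only the modules valid for its type.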

Link: http://feedproxy.google.com/~r/PentestTools/~3/oqafc7KT-OM/omnibus-open-source-intelligence.html

Hash-Buster v2.0 – Tool Which Uses Several APIs To Perform Hash Lookups

Features

- Automatic hash type identification
- Supports MD5, SHA1, SHA2
- Can extract & crack hashes from a file
- Can find hashes in a directory, recursively
- 6 robust APIs
- As powerful as Hulk, as intelligent as Bruce Banner

Single hash

You don't need to specify the hash type. Hash Buster will identify and crack it in under 3 seconds.

Cracking hashes from a file

Hash Buster can find your hashes even if they are stored in a file like this:

simple@gmail.com:21232f297a57a5a743894a0e4a801fc3
{"json@gmail.com":"d033e22ae348aeb5660fc2140aec35850c4da997"}
surrondedbytext8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918surrondedbytext

Finding hashes in a directory

Yep, just specify a directory and Hash Buster will go through all the files and directories present in it, looking for hashes.

Installation & Usage

You can install Hash-Buster with the following command:

make install

Cracking a single hash:

buster -s <hash>

Cracking hashes from a file:

buster -f /root/hashes.txt

Finding hashes in a directory:

buster -d /root/Documents

Note: please don't add a trailing / to the directory path.

Download Hash-Buster
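Automatic hash type identification usually comes down to matching hex runs and mapping their length to a digest type (32 hex chars for MD5, 40 for SHA1, 64 for SHA-256, 128 for SHA-512). A sketch of that idea, under the assumption that candidates are plain hex digests (this is not Hash-Buster's actual code):

```python
import re

# Any run of 32-128 hex characters is a candidate digest.
HEX = re.compile(r"[a-fA-F0-9]{32,128}")
LENGTHS = {32: "MD5", 40: "SHA1", 64: "SHA-256", 128: "SHA-512"}

def find_hashes(text):
    """Extract (digest, type) pairs from arbitrary text, keeping only
    runs whose length matches a known digest size."""
    found = []
    for match in HEX.finditer(text):
        kind = LENGTHS.get(len(match.group()))
        if kind:
            found.append((match.group(), kind))
    return found
```

Note the limitation of a pure length heuristic: a digest fused with surrounding hex-looking letters (as in the "surrondedbytext" example above) needs extra delimiter handling, which the real tool takes care of.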

Link: http://feedproxy.google.com/~r/PentestTools/~3/qAMbMLPSE9g/hash-buster-v20-tool-which-uses-several.html

Msploitego – Pentesting Suite For Maltego Based On Data In A Metasploit Database

msploitego leverages the data gathered in a Metasploit database by enumerating and creating specific entities for services. Services like samba, smtp, snmp and http have transforms to enumerate even further. Entities can either be loaded from a Metasploit XML export file or taken directly from the Postgres msf database.

Requirements

- Python 2.7
- Has only been tested on Kali Linux
- Software installations: Metasploit, nmap, enum4linux, smtp-check, nikto

Installation

- Check out the repository and update the transform path inside Maltego.
- In Maltego, import the config from msploitego/src/msploitego/resources/maltego/msploitego.mtz.

General Use

Using an exported Metasploit XML file:

- Run a db_nmap scan in Metasploit, or import a previous scan:
  msf> db_nmap -vvvv -T5 -A -sS -ST -Pn
  msf> db_import /path/to/your/nmapfile.xml
- Export the database to an XML file:
  msf> db_export -f xml /path/to/your/output.xml
- In Maltego, drag a MetasploitDBXML entity onto the graph.
- Update the entity with the path to your Metasploit database file.
- Run the MetasploitDB transform to enumerate hosts.
- From there, several transforms are available to enumerate services and vulnerabilities stored in the Metasploit DB.

Using Postgres:

- Drag and drop a Postgresql DB entity onto the canvas and enter the DB details.
- Run the Postgresql transforms directly against a running DB.

Notes

Instead of running a Nikto scan directly from Maltego, I've opted to include a field for a Nikto XML file. Nikto can take a long time to run, so it is best to manage that directly from the OS.

TODOs

- Connect directly to the Postgres database – in progress
- Many, many more transforms for actions on generated entities.

Download Msploitego
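Enumerating hosts and services from a db_export XML file amounts to walking the host and service elements of the export. A minimal sketch with the standard library; the element names here follow a typical db_export layout and may need adjusting for your Metasploit version (this is not msploitego's actual parser):

```python
import xml.etree.ElementTree as ET

def hosts_and_services(xml_text):
    """Yield (address, port, proto, name) tuples from a Metasploit
    XML export. Element names are assumptions about the export layout."""
    root = ET.fromstring(xml_text)
    for host in root.iter("host"):
        address = host.findtext("address")
        for service in host.iter("service"):
            yield (address,
                   service.findtext("port"),
                   service.findtext("proto"),
                   service.findtext("name"))

# A tiny hand-written sample in the assumed layout:
sample = ("<MetasploitV4><hosts><host><address>10.0.0.5</address>"
          "<services><service><port>445</port><proto>tcp</proto>"
          "<name>smb</name></service></services></host></hosts>"
          "</MetasploitV4>")
```

Each tuple yielded here is roughly what msploitego turns into a Maltego entity for further transforms.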

Link: http://feedproxy.google.com/~r/PentestTools/~3/NL3Bxk8kM2s/msploitego-pentesting-suite-for-maltego.html

AutoSQLi – An Automatic SQL Injection Tool Which Takes Advantage Of Googler, Ddgr, WhatWaf And SQLMap

An automatic SQL injection tool which takes advantage of ~DorkNet~ Googler, Ddgr, WhatWaf and sqlmap.

Features

- Save system – a complete save system which can resume even after your PC crashes.
- Dorking – from the command line (one dork): yes; from a file: no; from an interactive wizard: yes.
- WAFfing – thanks to Ekultek, WhatWaf now has a JSON output function, so this part is mostly finished. Update: WhatWaf is completely working with AutoSQLi; sqlmap is the next big step.
- Sqlmapping – I'll look into whether there is some sort of sqlmap API, because I don't want to use execute this time.
- Reporting: yes
- REST API: nope

TODO:

- Log handling (logging with different levels, cleanly)
- Translate output (option to translate the save, which is in pickle format, to a JSON/CSV save)
- Spellcheck (correct wrongly spelled words and grammatical errors; I'm on Neovim right now and there is no automatic spell check)

The Plan

This plan is a bit outdated, but the tool will follow this idea.

AutoSQLi will be a Python application which will, automatically, using a dork provided by the user, return a list of websites vulnerable to SQL injection.

To find vulnerable websites, the user first provides a dork, which is passed to findDorks.py and returns a list of URLs corresponding to it. Then, AutoSQLi will do some very basic checks (TODO: maybe using sqlmap and its --smart and --batch functions) to verify whether the application is protected by a WAF, or whether one of its parameters is vulnerable.

Sometimes websites are protected by a Web Application Firewall, or in short, a WAF. To identify and get around these WAFs, AutoSQLi will use WhatWaf.

Finally, AutoSQLi will exploit the website using sqlmap, and give the user the choice to do whatever they want!

Tor

AutoSQLi should also work over Tor by default, so it should check for Tor availability on startup.

Download AutoSQLi
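The crash-resilient, pickle-based save system the feature list describes can be sketched as a write-to-temp-then-rename pattern, so a crash mid-write never corrupts an existing save. Function and field names below are illustrative, not AutoSQLi's actual code:

```python
import os
import pickle
import tempfile

def save_state(path, state):
    """Persist the scan state; atomic replace keeps the previous save
    intact if the process dies mid-write."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as fh:
        pickle.dump(state, fh)
    os.replace(tmp, path)  # atomic on POSIX and Windows

def load_state(path, default=None):
    """Resume from a previous save, or fall back to a fresh state."""
    try:
        with open(path, "rb") as fh:
            return pickle.load(fh)
    except (FileNotFoundError, pickle.UnpicklingError):
        return default
```

The planned "translate output" TODO would simply swap pickle.dump/pickle.load for json.dump/json.load (or a CSV writer) on the same state dictionary.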

Link: http://feedproxy.google.com/~r/PentestTools/~3/7N92LhalNtc/autosqli-automatic-sql-injection-tool.html

Wifite 2.1.0 – Automated Wireless Attack Tool

A complete re-write of wifite, a Python script for auditing wireless networks. Wifite runs existing wireless-auditing tools for you. Stop memorizing command arguments & switches!

What's new in Wifite2?

- Fewer bugs
  - Cleaner process management. Does not leave processes running in the background (the old wifite was bad about this).
  - No longer "one monolithic script". Has working unit tests. Pull requests are less painful!
- Speed
  - Target access points are refreshed every second instead of every 5 seconds.
- Accuracy
  - Displays the real-time power level of the currently attacked target.
  - Displays more information during an attack (e.g. % during WEP chopchop attacks, Pixie-Dust step index, etc.)
- Educational
  - The --verbose option (expandable to -vv or -vvv) shows which commands are executed and the output of those commands. This can help debug why Wifite is not working for you, or teach you how these tools are used.
- Actively developed (as of March 2018).
- Python 3 support.
- Sweet new ASCII banner.

What's gone in Wifite2?

- No more WPS PIN attack, because it can take days on average. However, the Pixie-Dust attack is still an option.
- Some command-line arguments (--wept, --wpst, and other confusing switches). You can still access some of these; try ./Wifite.py -h -v

What's not new?

- (Mostly) backwards compatible with the original wifite's arguments.
- Same text-based interface everyone knows and loves.

Brief Feature List

- Reaver (or bully) Pixie-Dust attack (enabled by default, force with --wps-only)
- WPA handshake capture (enabled by default, force with --no-wps)
- Validates handshakes against pyrit, tshark, cowpatty, and aircrack-ng (when available)
- Various WEP attacks (replay, chopchop, fragment, hirte, p0841, caffe-latte)
- Automatically decloaks hidden access points while scanning or attacking. Note: this only works when the channel is fixed; use the -c switch. Disable it via the --no-deauths switch.
- 5GHz support for some wireless cards (via the -5 switch). Note: some tools don't play well on 5GHz channels (e.g. aireplay-ng)
- Stores cracked passwords and handshakes to the current directory (--cracked), including metadata about the access point.
- Provides commands to crack captured WPA handshakes (--crack), including all commands needed to crack using aircrack-ng, john, hashcat, or pyrit.

Linux Distribution Support

Wifite2 is designed specifically for the latest version of Kali's rolling release (tested on Kali 2017.2, updated Jan 2018). Other pen-testing distributions (such as BackBox) have outdated versions of the tools used by Wifite; these distributions are not supported.

Required Tools

Only the latest versions of these programs are supported.

Required:

- iwconfig: for identifying wireless devices already in monitor mode.
- ifconfig: for starting/stopping wireless devices.
- Aircrack-ng suite, which includes:
  - aircrack-ng: for cracking WEP .cap files and WPA handshake captures.
  - aireplay-ng: for deauthing access points, replaying capture files, and various WEP attacks.
  - airmon-ng: for enumerating and enabling monitor mode on wireless devices.
  - airodump-ng: for target scanning and capture-file generation.
  - packetforge-ng: for forging capture files.

Optional, but recommended:

- tshark: for detecting WPS networks and inspecting handshake capture files.
- reaver: for WPS Pixie-Dust attacks. Note: Reaver's wash tool can be used to detect WPS networks if tshark is not found.
- bully: for WPS Pixie-Dust attacks. An alternative to Reaver; specify --bully to use Bully instead. Bully is also used to fetch the PSK if Reaver cannot after cracking the WPS PIN.
- cowpatty: for detecting handshake captures.
- pyrit: for detecting handshake captures.

Installing & Running

git clone https://github.com/derv82/wifite2.git
cd wifite2
./Wifite.py

Screenshots

- Cracking a WPS PIN using reaver's Pixie-Dust attack, then retrieving the WPA PSK using bully.
- Decloaking & cracking a hidden access point (via the WPA handshake attack).
- Cracking a weak WEP password (using the WEP replay attack).

Download Wifite2
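Since Wifite2 wraps external tools, a natural first step is checking which of them are actually on PATH. A small sketch of such a pre-flight check (Wifite2 performs its own, more detailed version of this):

```python
import shutil

# Tool lists taken from the Required Tools section above.
REQUIRED = ["iwconfig", "ifconfig", "aircrack-ng", "aireplay-ng",
            "airmon-ng", "airodump-ng", "packetforge-ng"]
OPTIONAL = ["tshark", "reaver", "bully", "cowpatty", "pyrit"]

def missing_tools(tools):
    """Return the subset of tools not found on PATH."""
    return [tool for tool in tools if shutil.which(tool) is None]
```

A wrapper script could refuse to start if missing_tools(REQUIRED) is non-empty, and merely warn about missing OPTIONAL tools.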

Link: http://feedproxy.google.com/~r/PentestTools/~3/3Y8Df4kTCFM/wifite-210-automated-wireless-attack.html

DumpsterDiver – Tool To Search Secrets In Various Filetypes

DumpsterDiver is a tool for analyzing big volumes of various file types in search of hardcoded secrets (e.g. AWS access keys, Azure share keys or SSH keys). Additionally, it allows creating simple search rules with basic conditions (e.g. report only .csv files containing at least 10 email addresses). The main idea of this tool is to detect any potential secret leaks.

Key features:

- Uses Shannon entropy to find private keys.
- Supports multiprocessing for analyzing files.
- Unpacks compressed archives (e.g. zip, tar.gz, etc.)
- Supports advanced search using simple rules (details below)

Usage

usage: DumpsterDiver.py [-h] -p LOCAL_PATH [-r] [-a]

Command line options:

- -p LOCAL_PATH – path to the folder containing files to be analyzed.
- -r, --remove – when this flag is set, files which don't contain any secret (or anything interesting, if the -a flag is set) will be removed.
- -a, --advance – when this flag is set, all files will be additionally analyzed using the rules specified in the rules.yaml file.

Pre-requisites

To run DumpsterDiver you have to install the required Python libraries. You can do this by running the following command:

pip install -r requirements.txt

Understanding the config.yaml file

There is no single tool which fits everyone's needs, and DumpsterDiver is no exception. In the config.yaml file you can customize the program to search for exactly what you want. Below you can find a description of each setting:

- logfile – specifies the file where logs should be saved.
- excluded – specifies file extensions to omit during a scan. There is no point in searching for hardcoded secrets in picture or video files, right?
- min_key_length and max_key_length – specify the minimum and maximum length of the secret you're looking for. Depending on your needs, this setting can greatly limit the number of false positives.
For example, an AWS secret key has a length of 40 bytes, so if you set min_key_length and max_key_length to 40, DumpsterDiver will analyze only 40-byte strings. However, it then won't take into account longer strings like Azure shared keys or private SSH keys.

Advanced search

DumpsterDiver also supports an advanced search. Beyond simple grepping with wildcards, this tool allows you to create conditions. Let's assume you're searching for a leak of corporate emails, and you're only interested in big leaks containing at least 100 email addresses. For this purpose you should edit the rules.yaml file in the following way:

filetype: [".*"]
filetype_weight: 0
grep_words: ["*@example.com"]
grep_words_weight: 10
grep_word_occurrence: 100

Let's assume a different scenario: you're looking for the terms "pass", "password", "haslo", "hasło" (if you're analyzing a Polish company's repository) in a .db or .sql file. You can achieve this by modifying the rules.yaml file in the following way:

filetype: [".db", ".sql"]
filetype_weight: 5
grep_words: ["*pass*", "*haslo*", "*hasło*"]
grep_words_weight: 5
grep_word_occurrence: 1

Note that the rule will be triggered only when the total weight (filetype_weight + grep_words_weight) is >= 10.

Download DumpsterDiver
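The Shannon entropy heuristic mentioned in the feature list measures how "random" a string looks, in bits per character; random base64-like key material scores much higher than ordinary prose. A minimal sketch (the scoring threshold you would pair with it is your choice, not DumpsterDiver's documented default):

```python
import math
from collections import Counter

def shannon_entropy(data):
    """Shannon entropy of a string in bits per character.
    0.0 for a single repeated character; log2(n) for n distinct
    equally frequent characters."""
    if not data:
        return 0.0
    counts = Counter(data)
    length = len(data)
    return -sum((n / length) * math.log2(n / length)
                for n in counts.values())
```

A scanner would then flag candidate strings (of the configured min/max key length) whose entropy exceeds some cutoff, which is why the min_key_length/max_key_length window matters so much for false positives.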

Link: http://feedproxy.google.com/~r/PentestTools/~3/uFdrDBkQmpw/dumpsterdiver-tool-to-search-secrets-in.html