GitMiner v2.0 – Tool For Advanced Mining For Content On Github

Advanced search and automation tool for GitHub. This tool aims to facilitate research for code or code snippets on GitHub through the site's search page.

MOTIVATION
Demonstrates the fragility of trusting public repositories to store code containing sensitive information.

REQUIREMENTS
lxml
requests
argparse
json
re

INSTALL
git clone http://github.com/UnkL4b/GitMiner
sudo apt-get install python-requests python-lxml
OR
pip install -r requirements.txt

Docker:
git clone http://github.com/UnkL4b/GitMiner
cd GitMiner
docker build -t gitminer .
docker run -it gitminer -h

HELP
UnkL4b - Automatic search for GitHub (v2.0)
-> github.com/UnkL4b
-> unkl4b.github.io

[WARNING] DEVELOPERS ASSUME NO LIABILITY AND ARE NOT RESPONSIBLE FOR ANY MISUSE OR DAMAGE CAUSED BY THIS PROGRAM.

usage: gitminer-v2.0.py [-h] [-q QUERY] [-m MODULE] [-o OUTPUT] [-r REGEX] [-c COOKIE]

optional arguments:
  -h, --help    show this help message and exit
  -q, --query   Specify the search term, e.g. 'filename:shadow path:etc'
  -m, --module  Specify the search module, e.g. wordpress
  -o, --output  Specify the output file where results will be saved, e.g. result.txt
  -r, --regex   Set a regex to search within each file, e.g. '/^\s*(.*?);?\s*$/gm'
  -c, --cookie  Specify the session cookie for your GitHub account

EXAMPLES
Searching for wordpress configuration files with passwords:
$:> python gitminer-v2.0.py -q 'filename:wp-config extension:php FTP_HOST in:file' -m wordpress -c pAAAhPOma9jEsXyLWZ-16RTTsGI8wDawbNs4 -o result.txt

Looking for Brazilian government files containing passwords:
$:> python gitminer-v2.0.py --query 'extension:php "root" in:file AND "gov.br" in:file' -m senhas -c pAAAhPOma9jEsXyLWZ-16RTTsGI8wDawbNs4

Looking for shadow files under the etc path:
$:> python gitminer-v2.0.py --query 'filename:shadow path:etc' -m root -c pAAAhPOma9jEsXyLWZ-16RTTsGI8wDawbNs4

Searching for joomla configuration files with passwords:
$:> python gitminer-v2.0.py --query 'filename:configuration extension:php "public password" in:file' -m joomla -c pAAAhPOma9jEsXyLWZ-16RTTsGI8wDawbNs4

Hacking SSH Servers
Dorks to search, by @techgaun (https://github.com/techgaun/github-dorks):

filename:.npmrc _auth | npm registry authentication data
filename:.dockercfg auth | docker registry authentication data
extension:pem private | private keys
extension:ppk private | puttygen private keys
filename:id_rsa or filename:id_dsa | private ssh keys
extension:sql mysql dump | mysql dump
extension:sql mysql dump password | mysql dump, look for password; you can try varieties
filename:credentials aws_access_key_id | might return false negatives with dummy values
filename:.s3cfg | might return false negatives with dummy values
filename:wp-config.php | wordpress config files
filename:.htpasswd | htpasswd files
filename:.env DB_USERNAME NOT homestead | laravel .env (CI, various ruby based frameworks too)
filename:.env MAIL_HOST=smtp.gmail.com | gmail smtp configuration (try different smtp services too)
filename:.git-credentials | git credentials store, add NOT username for more valid results
PT_TOKEN language:bash | pivotaltracker tokens
filename:.bashrc password | search for passwords, etc. in .bashrc (try with .bash_profile too)
filename:.bashrc mailchimp | variation of above (try more variations)
filename:.bash_profile aws | aws access and secret keys
rds.amazonaws.com password | Amazon RDS possible credentials
extension:json api.forecast.io | try variations, find api keys/secrets
extension:json mongolab.com | mongolab credentials in json configs
extension:yaml mongolab.com | mongolab credentials in yaml configs (try with yml)
jsforce extension:js conn.login | possible salesforce credentials in nodejs projects
SF_USERNAME salesforce | possible salesforce credentials
filename:.tugboat NOT _tugboat | Digital Ocean tugboat config
HEROKU_API_KEY language:shell | Heroku api keys
HEROKU_API_KEY language:json | Heroku api keys in json files
filename:.netrc password | netrc that possibly holds sensitive credentials
filename:_netrc password | netrc that possibly holds sensitive credentials
filename:hub oauth_token | hub config that stores github tokens
filename:robomongo.json | mongodb credentials file used by robomongo
filename:filezilla.xml Pass | filezilla config file with possible user/pass to ftp
filename:recentservers.xml Pass | filezilla config file with possible user/pass to ftp
filename:config.json auths | docker registry authentication data
filename:idea14.key | IntelliJ Idea 14 key, try variations for other versions
filename:config irc_pass | possible IRC config
filename:connections.xml | possible db connections configuration, try variations to be specific
filename:express.conf path:.openshift | openshift config, only email and server though
filename:.pgpass | PostgreSQL file which can contain passwords
filename:proftpdpasswd | usernames and passwords of proftpd created by cpanel
filename:ventrilo_srv.ini | Ventrilo configuration
[WFClient] Password= extension:ica | WinFrame-Client info needed by users to connect to Citrix Application Servers
filename:server.cfg rcon password | Counter Strike RCON passwords
JEKYLL_GITHUB_TOKEN | Github tokens used for jekyll
filename:.bash_history | bash history file
filename:.cshrc | RC file for csh shell
filename:.history | history file (often used by many tools)
filename:.sh_history | korn shell history
filename:sshd_config | OpenSSH server config
filename:dhcpd.conf | DHCP service config
filename:prod.exs NOT prod.secret.exs | Phoenix prod configuration file
filename:prod.secret.exs | Phoenix prod secret
filename:configuration.php JConfig password | Joomla configuration file
filename:config.php dbpasswd | PHP application database password (e.g., phpBB forum software)
path:sites databases password | Drupal website database credentials
shodan_api_key language:python | Shodan API keys (try other languages too)
filename:shadow path:etc | contains encrypted passwords and account information of newer unix systems
filename:passwd path:etc | contains user account information including encrypted passwords of traditional unix systems
extension:avastlic | contains license keys for Avast! Antivirus
extension:dbeaver-data-sources.xml | DBeaver config containing MySQL credentials
filename:.esmtprc password | esmtp configuration
extension:json googleusercontent client_secret | OAuth credentials for accessing Google APIs
HOMEBREW_GITHUB_API_TOKEN language:shell | Github token usually set by homebrew users
xoxp OR xoxb | Slack bot and private tokens
.mlab.com password | MLAB hosted MongoDB credentials
filename:logins.json | Firefox saved password collection (key3.db usually in same repo)
filename:CCCam.cfg | CCCam server config file
msg nickserv identify filename:config | possible IRC login passwords
filename:settings.py SECRET_KEY | Django secret keys (usually allows for session hijacking, RCE, etc.)

Download GitMiner
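The -r/--regex option filters the contents of matched files with a JavaScript-style '/pattern/flags' expression. The post-processing step can be sketched like this (a standalone illustration, not GitMiner's actual code; the helper name is invented):

```python
import re

def apply_dork_regex(file_text, dork_regex):
    """Strip a JavaScript-style /pattern/flags wrapper, then return all matches."""
    pattern = dork_regex
    if pattern.startswith('/'):
        # drop the leading '/' and the trailing '/flags' (e.g. '/^\s*(.*?);?\s*$/gm')
        pattern = pattern[1:pattern.rfind('/')]
    # 'm' in the JS flags corresponds to Python's re.MULTILINE
    return re.findall(pattern, file_text, re.MULTILINE)

wp_config = """define('DB_USER', 'admin');
define('DB_PASSWORD', 'hunter2');"""

# grab every define(...) line, mimicking a credentials-hunting regex
print(apply_dork_regex(wp_config, r"/^define\((.*)\);$/gm"))
# → ["'DB_USER', 'admin'", "'DB_PASSWORD', 'hunter2'"]
```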

Link: http://feedproxy.google.com/~r/PentestTools/~3/VtATqnX-O4U/gitminer-v20-tool-for-advanced-mining.html

CloudSploit Scans – AWS Security Scanning Checks

CloudSploit Scans is an open-source project designed to detect security risks in an AWS account. These scripts run against an AWS account and return a series of potential misconfigurations and security risks.

Installation
Ensure that NodeJS is installed. If not, install it from here.
git clone git@github.com:cloudsploit/scans.git
npm install

Setup
To begin using the scanner, edit the index.js file with your AWS key, secret, and optionally (for temporary credentials) a session token. You can also point it at a file containing credentials. To determine the permissions associated with your credentials, see the permissions section below. In the list of plugins in the exports.js file, comment out any plugins you do not wish to run. You can also skip entire regions by modifying the skipRegions array. Alternatively, set the environment variables expected by the AWS SDKs: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN.

Cross-Account Roles
When using the hosted scanner, you'll need to create a cross-account IAM role. Cross-account roles enable you to share access to your account with another AWS account using the same policy model that you're used to. The advantage is that cross-account roles are much more secure than key-based access, since an attacker who steals a cross-account role ARN still can't make API calls unless they also infiltrate the authorized AWS account.

To create a cross-account role:
1. Navigate to the IAM console.
2. Click "Roles" and then "Create New Role".
3. Provide a role name (suggested: "cloudsploit").
4. Select the "Role for Cross-Account Access" radio button.
5. Click the "Select" button next to "Allows IAM users from a 3rd party AWS account to access this account."
6. Enter 057012691312 for the account ID (this is the ID of CloudSploit's AWS account).
7. Copy the auto-generated external ID from the CloudSploit web page and paste it into the AWS IAM console textbox.
8. Ensure that "Require MFA" is not selected.
9. Click "Next Step".
10. Select the "Security Audit" policy, then click "Next Step" again.
11. Click through to create the role.

Permissions
The scans require read-only permissions to your account. This can be done by adding the "Security Audit" AWS managed policy to your IAM user or role.

Security Audit Managed Policy (Recommended)
To configure the managed policy:
1. Open the IAM Console.
2. Find your user or role.
3. Click the "Permissions" tab.
4. Under "Managed Policy", click "Attach policy".
5. In the filter box, enter "Security Audit".
6. Select the "Security Audit" policy and save.

Inline Policy (Not Recommended)
If you'd prefer to be more restrictive, the following IAM policy contains the exact permissions used by the scan.
WARNING: This policy will likely change as more plugins are written. If a test returns "UNKNOWN", it is likely missing a required permission.
The preferred method is to use the "Security Audit" policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "cloudfront:ListDistributions",
        "cloudtrail:DescribeTrails",
        "configservice:DescribeConfigurationRecorders",
        "configservice:DescribeConfigurationRecorderStatus",
        "ec2:DescribeInstances",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeAccountAttributes",
        "ec2:DescribeAddresses",
        "ec2:DescribeVpcs",
        "ec2:DescribeFlowLogs",
        "ec2:DescribeSubnets",
        "elasticloadbalancing:DescribeLoadBalancerPolicies",
        "elasticloadbalancing:DescribeLoadBalancers",
        "iam:GenerateCredentialReport",
        "iam:ListServerCertificates",
        "iam:ListGroups",
        "iam:GetGroup",
        "iam:GetAccountPasswordPolicy",
        "iam:ListUsers",
        "iam:ListUserPolicies",
        "iam:ListAttachedUserPolicies",
        "kms:ListKeys",
        "kms:DescribeKey",
        "kms:GetKeyRotationStatus",
        "rds:DescribeDBInstances",
        "rds:DescribeDBClusters",
        "route53domains:ListDomains",
        "s3:GetBucketVersioning",
        "s3:GetBucketLogging",
        "s3:GetBucketAcl",
        "s3:ListBuckets",
        "ses:ListIdentities",
        "ses:getIdentityDkimAttributes"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Running
To run a standard scan, showing all outputs and results, simply run:
node index.js

Optional Plugins
Some plugins may require additional permissions not outlined above. Since their required IAM permissions are not included in the SecurityAudit managed policy, these plugins are not included in the exports.js file by default. To enable them, uncomment them in the exports.js file, add the required permissions to an inline IAM policy, and re-run the scan.

Compliance
CloudSploit also supports mapping its plugins to particular compliance policies. To run a compliance scan, use the --compliance flag. For example:
node index.js --compliance=hipaa
CloudSploit currently supports the following compliance mappings:
HIPAA: HIPAA scans map CloudSploit plugins to the Health Insurance Portability and Accountability Act of 1996.

Architecture
CloudSploit works in two phases.
First, it queries the AWS APIs for various metadata about your account. This is known as the "collection" phase. Once all the necessary data has been collected, the result is passed to the second phase, "scanning". The scan uses the collected data to search for potential misconfigurations, risks, and other security issues, which are then provided as output.

Writing a Plugin

Collection Phase
To write a plugin, you must understand which AWS API calls your scan makes. These must be added to the collect.js file, which determines the AWS API calls and the order in which they are made. For example:

CloudFront: {
  listDistributions: {
    property: 'DistributionList',
    secondProperty: 'Items'
  }
},

This declaration tells the CloudSploit collection engine to query the CloudFront service using the listDistributions call and then save the results returned under DistributionList.Items.

The second section in collect.js is postcalls, an array of objects defining API calls that rely on other calls being returned first. For example, if you need to first query for all EC2 instances, and then loop through each instance and run a more detailed call, you would add the EC2:DescribeInstances call in the first calls section and then add the more detailed call in postCalls, setting it to rely on the output of DescribeInstances. An example:

getGroup: {
  reliesOnService: 'iam',
  reliesOnCall: 'listGroups',
  filterKey: 'GroupName',
  filterValue: 'GroupName'
},

This section tells CloudSploit to wait until the IAM:listGroups call has been made, and then loop through the data that is returned. The filterKey tells CloudSploit the name of the key from the original response, while filterValue tells it which property to set in the getGroup call filter. For example: iam.getGroup({GroupName: abc}) where abc is the GroupName from the returned list.
CloudSploit will loop through each response, re-invoking getGroup for each element.

Scanning Phase
After the data has been collected, it is passed to the scanning engine, where the results are analyzed for risks. Each plugin must export the following:
- title (string): a user-friendly title for the plugin
- category (string): the AWS category (EC2, RDS, ELB, etc.)
- description (string): a description of what the plugin does
- more_info (string): a more detailed description of the risk being tested for
- link (string): an AWS help URL describing the service or risk, preferably with mitigation methods
- recommended_action (string): what the user should do to mitigate the risk found
- run (function): a function that runs the test (see below)
The run function accepts the full collection object obtained in the first phase and calls back with the results and the data source.

Result Codes
Each test has a result code that is used to determine whether the test was successful and its risk level:
0: OKAY: No risks
1: WARN: The result represents a potential misconfiguration or issue but is not an immediate risk
2: FAIL: The result presents an immediate risk to the security of the account
3: UNKNOWN: The results could not be determined (API failure, wrong permissions, etc.)

Tips for Writing Plugins
Many security risks can be detected using the same API calls. To minimize the number of API calls being made, use the cache helper function to cache the results of an API call made in one test for future tests. For example, the plugins "s3BucketPolicies" and "s3BucketPreventDelete" both call APIs to list every S3 bucket. These can be combined into a single plugin, "s3Buckets", which exports two tests called "bucketPolicies" and "preventDelete". This way, the API is called once, but multiple tests are run on the same results.

Ensure AWS API calls are being used optimally.
For example, call describeInstances with empty parameters to get all instances, instead of calling describeInstances multiple times, looping through each instance name.

Use async.eachLimit to reduce the number of simultaneous API calls. Instead of using a for loop over 100 requests, spread them out using async's eachLimit.

Example
To more clearly illustrate writing a new plugin, let's consider the "IAM Empty Groups" plugin. First, we know that we will need to query for a list of groups via listGroups, then loop through each group and query for the more detailed set of data via getGroup.

We'll add these API calls to collect.js. First, under calls, add:

IAM: {
  listGroups: {
    property: 'Groups'
  }
},

The property tells CloudSploit which property to read in the response from AWS. Then, under postCalls, add:

IAM: {
  getGroup: {
    reliesOnService: 'iam',
    reliesOnCall: 'listGroups',
    filterKey: 'GroupName',
    filterValue: 'GroupName'
  }
},

CloudSploit will first get the list of groups, then loop through each one, using the group name to get more detailed info via getGroup.

Next, we'll write the plugin. Create a new file in the plugins/iam folder called emptyGroups.js (this plugin already exists, but you can create a similar one for the purposes of this example). In the file, we'll be sure to export the plugin's title, category, description, link, and more information about it.
Additionally, we will add any API calls it makes:

apis: ['IAM:listGroups', 'IAM:getGroup'],

In the run function, we can obtain the output of the collection phase from earlier by doing:

var listGroups = helpers.addSource(cache, source, ['iam', 'listGroups', region]);

Then, we can loop through each of the results and do:

var getGroup = helpers.addSource(cache, source, ['iam', 'getGroup', region, group.GroupName]);

The helpers function ensures that the proper results are returned from the collection and that they are saved into a "source" variable which can be returned with the results. Now, we can write the plugin functionality by checking for the data relevant to our requirements:

if (!getGroup || getGroup.err || !getGroup.data || !getGroup.data.Users) {
  helpers.addResult(results, 3,
    'Unable to query for group: ' + group.GroupName, 'global', group.Arn);
} else if (!getGroup.data.Users.length) {
  helpers.addResult(results, 1,
    'Group: ' + group.GroupName + ' does not contain any users', 'global', group.Arn);
  return cb();
} else {
  helpers.addResult(results, 0,
    'Group: ' + group.GroupName + ' contains ' + getGroup.data.Users.length + ' user(s)', 'global', group.Arn);
}

The addResult function ensures we are adding the results to the results array in the proper format. It accepts:
(results array, score, message, region, resource)
The resource is optional, and the score must be between 0 and 3 to indicate PASS, WARN, FAIL, or UNKNOWN.

Download CloudSploit Scans
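The branching in the emptyGroups plugin maps directly onto the result codes. Re-expressed in Python purely for illustration (CloudSploit itself is NodeJS, and the function name here is invented for this sketch):

```python
# Result codes: 0 OKAY, 1 WARN, 2 FAIL, 3 UNKNOWN
OK, WARN, FAIL, UNKNOWN = 0, 1, 2, 3

def score_group(get_group):
    """Mirror the emptyGroups branches: get_group is the collected
    {'err': ..., 'data': {'Users': [...]}} response for one IAM group."""
    if not get_group or get_group.get('err') or 'Users' not in (get_group.get('data') or {}):
        return UNKNOWN   # API failure or missing permission
    if not get_group['data']['Users']:
        return WARN      # empty group: a misconfiguration, not an immediate risk
    return OK            # group contains users

print(score_group({'err': 'AccessDenied'}))         # → 3
print(score_group({'data': {'Users': []}}))         # → 1
print(score_group({'data': {'Users': ['alice']}}))  # → 0
```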

Link: http://feedproxy.google.com/~r/PentestTools/~3/kO89DoOlQUw/cloudsploit-scans-aws-security-scanning.html

WAF Buster – Disrupt WAF By Abusing SSL/TLS Ciphers

Disrupt a WAF by abusing SSL/TLS ciphers.

About WAF_buster
This tool was created to analyze the ciphers supported by the web application firewall sitting in front of a web server (reference: https://0x09al.github.io/waf/bypass/ssl/2018/07/02/web-application-firewall-bypass.html). It works by first running SSLScan to enumerate all ciphers the web server supports during SSL/TLS negotiation. With the resulting list of supported ciphers, it then uses curl to query the web server with each cipher to check which ones are unsupported by the WAF but supported by the web server. If any such cipher is found, the message "Firewall is bypassed" is displayed.

Installation
git clone https://github.com/viperbluff/WAF_buster.git

Python2
This tool was written for Python2 and uses the following modules:
1. requests
2. os
3. sys
4. subprocess

Usage
Open a terminal and run:
python2 WAF_buster.py --input

Download WAF_buster
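The bypass condition the tool tests for reduces to simple set logic: any cipher the origin server negotiates but the WAF does not is a channel the WAF cannot inspect. A hypothetical standalone sketch (the real tool drives sslscan and curl rather than comparing precomputed sets):

```python
def bypass_candidates(server_ciphers, waf_ciphers):
    """Ciphers the origin server accepts but the WAF does not:
    requests negotiated with these reach the server unfiltered."""
    return sorted(set(server_ciphers) - set(waf_ciphers))

# Example sets; in practice these come from sslscan output and per-cipher curl probes.
server = {'TLS_RSA_WITH_AES_128_CBC_SHA', 'TLS_RSA_WITH_3DES_EDE_CBC_SHA'}
waf = {'TLS_RSA_WITH_AES_128_CBC_SHA'}

gaps = bypass_candidates(server, waf)
if gaps:
    # the message WAF_buster prints on success
    print('Firewall is bypassed:', gaps)
```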

Link: http://feedproxy.google.com/~r/PentestTools/~3/0fQO7UVapz0/waf-buster-disrupt-waf-by-abusing.html

Aws_Public_Ips – Fetch All Public IP Addresses Tied To Your AWS Account

aws_public_ips is a tool to fetch all public IP addresses (both IPv4 and IPv6) associated with an AWS account. It can be used as a library or as a CLI, and supports the following AWS services (all with both Classic & VPC flavors):
- APIGateway
- CloudFront
- EC2 (and as a result: ECS, EKS, Beanstalk, Fargate, Batch, & NAT Instances)
- ElasticSearch
- ELB (Classic ELB)
- ELBv2 (ALB/NLB)
- Lightsail
- RDS
- Redshift
If a service isn't listed (S3, ElastiCache, etc.), it is most likely because it has nothing to report (i.e. it might not be deployable publicly, or all of its IP addresses might resolve to global AWS infrastructure).

Quick start
Install the gem and run it:
$ gem install aws_public_ips

# Uses default ~/.aws/credentials
$ aws_public_ips
52.84.11.13
52.84.11.83
2600:9000:2039:ba00:1a:cd27:1440:93a1
2600:9000:2039:6e00:1a:cd27:1440:93a1

# With a custom profile
$ AWS_PROFILE=production aws_public_ips
52.84.11.159

CLI reference
$ aws_public_ips --help
Usage: aws_public_ips [options]
  -s, --services <s1>,<s2>,<s3>  List of AWS services to check. Available services: apigateway,cloudfront,ec2,elasticsearch,elb,elbv2,lightsail,rds,redshift. Defaults to all.
  -f, --format <format>          Set output format. Available formats: json,prettyjson,text. Defaults to text.
  -v, --[no-]verbose             Enable debug/trace output
      --version                  Print version
  -h, --help                     Show this help message

Configuration
For authentication aws_public_ips uses the default aws-sdk-ruby configuration, meaning that the following are checked in order:
1. Environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, AWS_PROFILE
2. Shared credentials files: ~/.aws/credentials and ~/.aws/config
3. Instance profile via the metadata endpoint (if running on EC2, ECS, EKS, or Fargate)
For more information, see the AWS SDK documentation on configuration.

IAM permissions
To find the public IPs from all AWS services, the minimal policy needed by your IAM user is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "apigateway:GET",
        "cloudfront:ListDistributions",
        "ec2:DescribeInstances",
        "elasticloadbalancing:DescribeLoadBalancers",
        "lightsail:GetInstances",
        "lightsail:GetLoadBalancers",
        "rds:DescribeDBInstances",
        "redshift:DescribeClusters"
      ],
      "Resource": "*"
    }
  ]
}

Contact
Feel free to tweet or direct message: @arkadiyt

Download Aws_Public_Ips
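For the EC2 case, the tool essentially walks DescribeInstances output and collects the public addresses. A rough Python equivalent operating on an already-fetched response (the dict shape follows the EC2 DescribeInstances API; the actual fetch via an AWS SDK is left out so the sketch stays self-contained):

```python
def ec2_public_ips(reservations):
    """Collect public IPv4 addresses and IPv6 addresses from a
    DescribeInstances-style 'Reservations' structure."""
    ips = []
    for reservation in reservations:
        for instance in reservation.get('Instances', []):
            if 'PublicIpAddress' in instance:
                ips.append(instance['PublicIpAddress'])
            # IPv6 addresses hang off each network interface
            for iface in instance.get('NetworkInterfaces', []):
                ips.extend(a['Ipv6Address'] for a in iface.get('Ipv6Addresses', []))
    return ips

# Minimal fabricated response for illustration
sample = [{'Instances': [{
    'PublicIpAddress': '52.84.11.13',
    'NetworkInterfaces': [
        {'Ipv6Addresses': [{'Ipv6Address': '2600:9000:2039:ba00:1a:cd27:1440:93a1'}]}
    ],
}]}]
print(ec2_public_ips(sample))
# → ['52.84.11.13', '2600:9000:2039:ba00:1a:cd27:1440:93a1']
```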

Link: http://feedproxy.google.com/~r/PentestTools/~3/aLYdLNP_wx4/awspublicips-fetch-all-public-ip.html

Resource-Counter – This Command Line Tool Counts The Number Of Resources In Different Categories Across Amazon Regions

This is a simple Python app that will count resources across different Amazon regions and display them on the command line. It first shows a dictionary of results for the monitored services on a per-region basis, then it shows totals across all regions in a friendlier format. It tries to use the most efficient query mechanism for each resource in order to manage the impact of API activity. I wrote this to help me scope out assessments and know where resources are in a target account.

The development plan is to upgrade the output (probably to a CSV file) and to continue to add services. If you have a specific service you want to see added, just add a request in the comments.

The current list includes:
- Application and Network Load Balancers
- Autoscale Groups
- Classic Load Balancers
- CloudTrail Trails
- Cloudwatch Rules
- Config Rules
- Dynamo Tables
- Elastic IP Addresses
- Glacier Vaults
- IAM Groups
- Images
- Instances
- KMS Keys
- Lambda Functions
- Launch Configurations
- NAT Gateways
- Network ACLs
- IAM Policies
- RDS Instances
- IAM Roles
- S3 Buckets
- SAML Providers
- SNS Topics
- Security Groups
- Snapshots
- Subnets
- IAM Users
- VPC Endpoints
- VPC Peering Connections
- VPCs
- Volumes

Usage:
To install, just copy it where you want it and install the requirements:
pip install -r ./requirements.txt
This was written in Python 3.6.

To run:
python count_resources.py

By default, it will use whatever AWS credentials are already configured on the system. You can also specify an access key/secret at runtime, and this is not stored. It only needs read permissions for the listed services; I use the ReadOnlyAccess managed policy, but you should also be able to use the SecurityAudit policy.

Usage: count_resources.py [OPTIONS]
Options:
  --access TEXT   AWS Access Key. Otherwise will use the standard credentials path for the AWS CLI.
  --secret TEXT   AWS Secret Key
  --profile TEXT  If you have multiple credential profiles, use this option to specify one.
–help Show this message and exit.Sample Output:Establishing AWS session using the profile- dev Current account ID: xxxxxxxxxx Counting resources across regions. This will take a few minutes…Resources by region {‘ap-northeast-1’: {‘instances’: 0, ‘volumes’: 0, ‘security_groups’: 1, ‘snapshots’: 0, ‘images’: 0, ‘vpcs’: 1, ‘subnets’: 3, ‘peering connections’: 0, ‘network ACLs’: 1, ‘elastic IPs’: 0, ‘NAT gateways’: 0, ‘VPC Endpoints’: 0, ‘autoscale groups’: 0, ‘launch configurations’: 0, ‘classic load balancers’: 0, ‘application and network load balancers’: 0, ‘lambdas’: 0, ‘glacier vaults’: 0, ‘cloudwatch rules’: 0, ‘config rules’: 0, ‘cloudtrail trails’: 1, ‘sns topics’: 0, ‘kms keys’: 0, ‘dynamo tables’: 0, ‘rds instances’: 0}, ‘ap-northeast-2’: {‘instances’: 0, ‘volumes’: 0, ‘security_groups’: 1, ‘snapshots’: 0, ‘images’: 0, ‘vpcs’: 1, ‘subnets’: 2, ‘peering connections’: 0, ‘network ACLs’: 1, ‘elastic IPs’: 0, ‘NAT gateways’: 0, ‘VPC Endpoints’: 0, ‘autoscale groups’: 0, ‘launch configurations’: 0, ‘classic load balancers’: 0, ‘application and network load balancers’: 0, ‘lambdas’: 0, ‘glacier vaults’: 0, ‘cloudwatch rules’: 0, ‘config rules’: 0, ‘cloudtrail trails’: 1, ‘sns topics’: 0, ‘kms keys’: 0, ‘dynamo tables’: 0, ‘rds instances’: 0}, ‘ap-south-1’: {‘instances’: 0, ‘volumes’: 0, ‘security_groups’: 1, ‘snapshots’: 0, ‘images’: 0, ‘vpcs’: 1, ‘subnets’: 2, ‘peering connections’: 0, ‘network ACLs’: 1, ‘elastic IPs’: 0, ‘NAT gateways’: 0, ‘VPC Endpoints’: 0, ‘autoscale groups’: 0, ‘launch configurations’: 0, ‘classic load balancers’: 0, ‘application and network load balancers’: 0, ‘lambdas’: 0, ‘glacier vaults’: 0, ‘cloudwatch rules’: 0, ‘config rules’: 0, ‘cloudtrail trails’: 1, ‘sns topics’: 0, ‘kms keys’: 0, ‘dynamo tables’: 0, ‘rds instances’: 0}, ‘ap-southeast-1’: {‘instances’: 0, ‘volumes’: 0, ‘security_groups’: 1, ‘snapshots’: 0, ‘images’: 0, ‘vpcs’: 1, ‘subnets’: 3, ‘peering connections’: 0, ‘network ACLs’: 1, ‘elastic IPs’: 0, ‘NAT gateways’: 0, ‘VPC 
Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0},
'ap-southeast-2': {'instances': 0, 'volumes': 0, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 3, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0},
'ca-central-1': {'instances': 0, 'volumes': 0, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 2, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0},
'eu-central-1': {'instances': 0, 'volumes': 0, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 3, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0},
'eu-west-1': {'instances': 0, 'volumes': 0, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 3, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0},
'eu-west-2': {'instances': 3, 'volumes': 3, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 3, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0},
'eu-west-3': {'instances': 0, 'volumes': 0, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 3, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0},
'sa-east-1': {'instances': 0, 'volumes': 0, 'security_groups': 1, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 3, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0},
'us-east-1': {'instances': 2, 'volumes': 2, 'security_groups': 19, 'snapshots': 0, 'images': 0, 'vpcs': 2, 'subnets': 3, 'peering connections': 0, 'network ACLs': 2, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 1, 'cloudtrail trails': 2, 'sns topics': 3, 'kms keys': 5, 'dynamo tables': 0, 'rds instances': 0},
'us-east-2': {'instances': 0, 'volumes': 0, 'security_groups': 2, 'snapshots': 0, 'images': 0, 'vpcs': 1, 'subnets': 3, 'peering connections': 0, 'network ACLs': 1, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 0, 'dynamo tables': 0, 'rds instances': 0},
'us-west-1': {'instances': 1, 'volumes': 3, 'security_groups': 14, 'snapshots': 1, 'images': 0, 'vpcs': 0, 'subnets': 0, 'peering connections': 0, 'network ACLs': 0, 'elastic IPs': 0, 'NAT gateways': 0, 'VPC Endpoints': 0, 'autoscale groups': 0, 'launch configurations': 0, 'classic load balancers': 0, 'application and network load balancers': 0, 'lambdas': 0, 'glacier vaults': 0, 'cloudwatch rules': 0, 'config rules': 0, 'cloudtrail trails': 1, 'sns topics': 0, 'kms keys': 1, 'dynamo tables': 0, 'rds instances': 0},
'us-west-2': {'instances': 9, 'volumes': 29, 'security_groups': 76, 'snapshots': 171, 'images': 104, 'vpcs': 7, 'subnets': 15, 'peering connections': 1, 'network ACLs': 8, 'elastic IPs': 7, 'NAT gateways': 1, 'VPC Endpoints': 0, 'autoscale groups': 1, 'launch configurations': 66, 'classic load balancers': 1, 'application and network load balancers': 2, 'lambdas': 10, 'glacier vaults': 1, 'cloudwatch rules': 8, 'config rules': 1, 'cloudtrail trails': 1, 'sns topics': 6, 'kms keys': 7, 'dynamo tables': 1, 'rds instances': 0}}

Resource totals across all regions
Application and Network Load Balancers : 2
Autoscale Groups : 1
Classic Load Balancers : 1
CloudTrail Trails : 16
Cloudwatch Rules : 8
Config Rules : 2
Dynamo Tables : 1
Elastic IP Addresses : 7
Glacier Vaults : 1
Groups : 12
Images : 104
Instances : 15
KMS Keys : 13
Lambda Functions : 10
Launch Configurations : 66
NAT Gateways : 1
Network ACLs : 22
Policies : 15
RDS Instances : 0
Roles : 40
S3 Buckets : 31
SAML Providers : 1
SNS Topics : 9
Security Groups : 122
Snapshots : 172
Subnets : 51
Users : 14
VPC Endpoints : 0
VPC Peering Connections : 1
VPCs : 21
Volumes : 37

Total resources: 796

Download Resource-Counter
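The cross-region totals above are simply the per-region dictionaries summed key by key. A minimal sketch of that aggregation (hypothetical helper and sample data, not Resource-Counter's actual code or output):

```python
# Roll up per-region resource counts into account-wide totals,
# the way the "Resource totals across all regions" section is produced.
# The region data below is illustrative only.
from collections import Counter

def total_resources(per_region):
    """Sum every resource count across all regions."""
    totals = Counter()
    for counts in per_region.values():
        totals.update(counts)
    return dict(totals)

per_region = {
    "us-east-1": {"instances": 2, "volumes": 2, "kms keys": 5},
    "us-west-2": {"instances": 9, "volumes": 29, "kms keys": 7},
}

totals = total_resources(per_region)
print(totals)                 # per-resource totals across both regions
print(sum(totals.values()))   # grand total, analogous to "Total resources"
```

Note that account-global resources (users, roles, S3 buckets) would be counted once rather than per region, which is why they appear only in the totals above.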

Link: http://feedproxy.google.com/~r/PentestTools/~3/0QCDjS_vnjY/resource-counter-this-command-line-tool.html

Rootstealer – X11 Trick To Inject Commands On Root Terminal

This is a simple example of a new attack using X11: a program that detects when a Linux user opens a terminal as root and injects intrusive commands into that terminal via the X11 library.

Video of Proof of Concept
The video demonstrates using rootstealer to spy on all GUI window interactions and inject commands only into root terminals. This approach is useful when an attacker needs to send a malicious program to prove that a user is vulnerable to social engineering. Forcing a command into a root terminal via the X11 library is an exotic way to show the diversity of weak points.

Install
# apt-get install libX11-dev libxtst-dev
# cd rootstealer/sendkeys
Edit the file rootstealer/cmd.cfg and write the command you want to inject. Then:
# make; cd ..   # back to rootstealer/
# pip install gi
or
# pip install gir
Run the Python script to spy on all GUI windows, looking for a window with the string "root@" in its title:
$ python rootstealer.py &
Note: If you prefer the full C code, for a simple standalone binary, you can use rootstealer.c:
$ sudo apt-get install libwnck-dev
$ gcc -o rootstealer rootstealer.c `pkg-config --cflags --libs libwnck-1.0` -DWNCK_I_KNOW_THIS_IS_UNSTABLE -DWNCK_COMPILATION
$ ./rootstealer &
Done. Watch the demo video: rootstealer forces commands only on the root terminal.

Mitigation
Don't trust anyone: https://www.esecurityplanet.com/views/article.php/3908881/9-Best-Defenses-Against-Social-Engineering-Attacks.htm
Whenever you log in as root, change the window title:
# gnome-terminal --title="SOME TITLE HERE"
This simple action can prevent the attack.

Tests
Tested on Xubuntu 16.04

Download Rootstealer
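The detection step boils down to matching window titles: a root shell usually shows "root@" in its prompt, and the terminal copies the prompt into the title. A minimal, display-free sketch of that logic (hypothetical helper names; the real tool walks X11 windows and injects keystrokes via the XTest extension, which is omitted here because it needs a live X display):

```python
# Sketch of rootstealer-style detection: pick out window titles that
# look like a root shell. The keystroke-injection step is intentionally
# left out; it requires an X server and the XTest extension.

def is_root_terminal(title: str) -> bool:
    """Return True if a window title looks like a root shell prompt."""
    return "root@" in title

def pick_targets(titles):
    """Filter a list of window titles down to injection candidates."""
    return [t for t in titles if is_root_terminal(t)]

windows = [
    "user@laptop: ~",
    "root@laptop: /etc",
    "Mozilla Firefox",
]
print(pick_targets(windows))  # ['root@laptop: /etc']
```

This also makes clear why the mitigation in the article works: retitling the root terminal with `gnome-terminal --title=...` removes the "root@" marker the detector keys on.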

Link: http://feedproxy.google.com/~r/PentestTools/~3/-M-T8gOTCIc/rootstealer-x11-trick-to-inject.html

BlackEye – The Most Complete Phishing Tool, With 32 Templates +1 Customizable

BLACKEYE is an upgrade of the original ShellPhish tool (https://github.com/thelinuxchoice/shellphish) by thelinuxchoice, released under the GNU license. It is the most complete phishing tool, with 32 templates plus 1 customizable. WARNING: IT ONLY WORKS ON LAN! This tool was made for educational purposes!
Phishing pages generated by An0nUD4Y (https://github.com/An0nUD4Y): Instagram
Phishing pages generated by the Social Fish tool (UndeadSec) (https://github.com/UndeadSec/SocialFish): Facebook, Google, SnapChat, Twitter, Microsoft
Phishing pages generated by @suljot_gjoka (https://github.com/whiteeagle0/blackeye): PayPal, eBay, CryptoCurrency, Verizon, DropBox, Adobe ID, Shopify, Messenger, Twitch, Myspace, Badoo, VK, Yandex, deviantART
Legal disclaimer: Usage of BlackEye for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this program. Only use for educational purposes.
Usage:
git clone https://github.com/thelinuxchoice/blackeye
cd blackeye
bash blackeye.sh
Download Blackeye

Link: http://feedproxy.google.com/~r/PentestTools/~3/MvvGRkMVEwY/blackeye-most-complete-phishing-tool.html

Polymorph – A Real-Time Network Packet Manipulation Framework With Support For Almost All Existing Protocols

Polymorph is a framework written in Python 3 that allows the modification of network packets in real time, giving the user maximum control over the contents of each packet. The framework is intended to provide an effective solution for real-time modification of network packets implementing practically any existing protocol, including private protocols that have no public specification. One of its main objectives is to give the user the greatest possible control over packet contents, together with the ability to perform complex processing on that information.

Installation

Download and installation on Linux
Polymorph is designed to be installed and run on a Linux operating system, such as Kali Linux. Before installing the framework, the following requirements must be installed:
apt-get install build-essential python-dev libnetfilter-queue-dev tshark tcpdump python3-pip wireshark
After installing the dependencies, the framework itself can be installed with the Python pip package manager:
pip3 install polymorph

Docker environment
From the project root:
docker-compose up -d
To access any of the machines in the environment:
docker exec -ti [polymorph | alice | bob] bash

Using Polymorph
The Polymorph framework is composed of two main interfaces:
Polymorph: a command console interface. It is the main interface, recommended for complex tasks such as modifying complex protocols on the fly, changing field types in a template, or modifying protocols without a public specification.
Phcli: the command-line interface of the Polymorph framework.
It is recommended for tasks such as modifying simple protocols or executing previously generated templates.

Using the Polymorph main interface
For examples and documentation please refer to: the English whitepaper, the Spanish whitepaper, and "Building a Proxy Fuzzer for the MQTT protocol with Polymorph".

Using the Phcli

Dissecting almost any network protocol
Let's start by seeing how Polymorph dissects the fields of different network protocols; this will be useful later when we want to modify any of those fields in real time. You can try any protocol that comes to mind.
Show only the HTTP layer and the fields belonging to it:
# phcli --protocol http --show-fields
Show the full HTTP packet and the fields belonging to it:
# phcli --protocol http --show-packet
You can also apply filters to network packets, for example to display only those containing a certain string or number:
# phcli -p dns --show-fields --in-pkt "phrack"
# phcli -p icmp --show-packet --in-pkt "84" --type "int"
You can also concatenate filters:
# phcli -p http --show-packet --in-pkt "phrack;GET;issues"
# phcli -p icmp --show-packet --in-pkt "012345;84" --type "str;int"
You can filter by the names of the fields that the protocol contains, but bear in mind that these are the names Polymorph assigns when it dissects the network packet:
# phcli -p icmp --show-packet --field "chksum"
You can also concatenate fields:
# phcli -p mqtt --show-packet --field "topic;msg"

Modifying network packets in real time
Now that we know Polymorph's representation of the network packet we want to modify, we will see how to modify it in real time. Let's start with some examples; all the filters explained in the previous section can also be applied here. The following modifies any packet containing the strings /issues/40/1.html and GET, inserting the value /issues/61/1.html into the request_uri field.
So when the user visits http://phrack.org/issues/40/1.html, the browser will actually load http://phrack.org/issues/61/1.html:
# phcli -p http --field "request_uri" --value "/issues/61/1.html" --in-pkt "/issues/40/1.html;GET"
The previous command works if we are in the middle of the communication between a machine and the gateway. To place yourself in the middle, you can use ARP spoofing:
# phcli --spoof arp --target 192.168.1.20 --gateway 192.168.1.1 -p http -f "request_uri" -v "/issues/61/1.html" --in-pkt "/issues/40/1.html;GET"
Or, to try it on localhost, simply modify the iptables rule that Polymorph establishes by default:
# phcli -p http -f "request_uri" -v "/issues/61/1.html" --in-pkt "/issues/40/1.html;GET" -ipt "iptables -A OUTPUT -j NFQUEUE --queue-num 1"
You may also want to modify a set of bytes of a network packet that Polymorph has not interpreted as a field. For this you can access the packet bytes directly using a slice. (Remember to add the iptables rule if you try it on localhost.)
# phcli -p icmp --bytes "50:55" --value "hello" --in-pkt "012345"
# phcli -p icmp -b "\-6:\-1" --value "hello" --in-pkt "012345"
# phcli -p tcp -b "\-54:\-20" -v '">
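The `--bytes` option above amounts to a slice assignment over the raw packet payload. A rough standalone illustration of the idea (my own sketch, not Polymorph's actual implementation; a real tool must also keep the packet length stable and fix up checksums afterwards):

```python
# Illustrative only: replace a byte range of a payload the way a
# "50:55" / "-6:-1" style slice option would, truncating or right-padding
# the new value so the overall packet length is unchanged.

def patch_bytes(payload: bytes, start: int, end: int, value: bytes) -> bytes:
    """Return payload with payload[start:end] replaced by value,
    keeping the total length unchanged (supports negative indices)."""
    slot = len(payload[start:end])
    # Fit the value to the slice: cut it down or pad with NUL bytes.
    fitted = value[:slot].ljust(slot, b"\x00")
    return payload[:start] + fitted + payload[end:]

pkt = b"0123456789"
print(patch_bytes(pkt, 2, 7, b"hello"))    # b'01hello789'
print(patch_bytes(pkt, -6, -1, b"hello"))  # b'0123hello9'
```

Keeping the length fixed matters here: growing or shrinking a TCP segment in flight would desynchronize sequence numbers, whereas an in-place overwrite only requires recomputing checksums.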