CloudSploit Scans – AWS Security Scanning Checks

CloudSploit Scans is an open-source project designed to detect security risks in an AWS account. These scripts are designed to run against an AWS account and return a series of potential misconfigurations and security risks.

Installation

Ensure that Node.js is installed, then clone the repository and install its dependencies:

    git clone git@github.com:cloudsploit/scans.git
    cd scans
    npm install

Setup

To begin using the scanner, edit the index.js file with your AWS key, secret, and, optionally (for temporary credentials), a session token. You can also point the scanner at a file containing credentials. To determine the permissions associated with your credentials, see the Permissions section below. In the list of plugins in the exports.js file, comment out any plugins you do not wish to run. You can also skip entire regions by modifying the skipRegions array.

Alternatively, you can set the typical environment variables expected by the AWS SDKs: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN.

Cross Account Roles

When using the hosted scanner, you'll need to create a cross-account IAM role. Cross-account roles enable you to share access to your account with another AWS account using the same policy model that you're used to. The advantage is that cross-account roles are much more secure than key-based access, since an attacker who steals a cross-account role ARN still can't make API calls unless they also compromise the authorized AWS account.

To create a cross-account role:

1. Navigate to the IAM console.
2. Click "Roles" and then "Create New Role".
3. Provide a role name (suggested: "cloudsploit").
4. Select the "Role for Cross-Account Access" radio button.
5. Click the "Select" button next to "Allows IAM users from a 3rd party AWS account to access this account."
6. Enter 057012691312 for the account ID (this is the ID of CloudSploit's AWS account).
7. Copy the auto-generated external ID from the CloudSploit web page and paste it into the AWS IAM console textbox.
8. Ensure that "Require MFA" is not selected.
9. Click "Next Step".
10. Select the "Security Audit" policy, then click "Next Step" again.
11. Click through to create the role.
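If you'd rather script the role creation than click through the console, the boto3 sketch below creates an equivalent role. It is not part of CloudSploit itself, and the external ID shown is a placeholder you must replace with the one generated on the CloudSploit web page:

    # Hypothetical boto3 equivalent of the console steps above.
    import json
    import boto3

    iam = boto3.client("iam")

    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::057012691312:root"},  # CloudSploit's account
            "Action": "sts:AssumeRole",
            # Placeholder: use the external ID generated by CloudSploit
            "Condition": {"StringEquals": {"sts:ExternalId": "EXTERNAL-ID-FROM-CLOUDSPLOIT"}},
        }],
    }

    # Create the role and attach the AWS managed Security Audit policy.
    iam.create_role(RoleName="cloudsploit",
                    AssumeRolePolicyDocument=json.dumps(trust_policy))
    iam.attach_role_policy(RoleName="cloudsploit",
                           PolicyArn="arn:aws:iam::aws:policy/SecurityAudit")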
Permissions

The scans require read-only permissions to your account. This can be granted by attaching the "Security Audit" AWS managed policy to your IAM user or role.

Security Audit Managed Policy (Recommended)

To configure the managed policy:

1. Open the IAM console.
2. Find your user or role.
3. Click the "Permissions" tab.
4. Under "Managed Policy", click "Attach policy".
5. In the filter box, enter "Security Audit".
6. Select the "Security Audit" policy and save.

Inline Policy (Not Recommended)

If you'd prefer to be more restrictive, the following IAM policy contains the exact permissions used by the scan.

WARNING: This policy will likely change as more plugins are written. If a test returns "UNKNOWN", it is likely missing a required permission. The preferred method is to use the "Security Audit" policy.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": [
            "cloudfront:ListDistributions",
            "cloudtrail:DescribeTrails",
            "configservice:DescribeConfigurationRecorders",
            "configservice:DescribeConfigurationRecorderStatus",
            "ec2:DescribeInstances",
            "ec2:DescribeSecurityGroups",
            "ec2:DescribeAccountAttributes",
            "ec2:DescribeAddresses",
            "ec2:DescribeVpcs",
            "ec2:DescribeFlowLogs",
            "ec2:DescribeSubnets",
            "elasticloadbalancing:DescribeLoadBalancerPolicies",
            "elasticloadbalancing:DescribeLoadBalancers",
            "iam:GenerateCredentialReport",
            "iam:ListServerCertificates",
            "iam:ListGroups",
            "iam:GetGroup",
            "iam:GetAccountPasswordPolicy",
            "iam:ListUsers",
            "iam:ListUserPolicies",
            "iam:ListAttachedUserPolicies",
            "kms:ListKeys",
            "kms:DescribeKey",
            "kms:GetKeyRotationStatus",
            "rds:DescribeDBInstances",
            "rds:DescribeDBClusters",
            "route53domains:ListDomains",
            "s3:GetBucketVersioning",
            "s3:GetBucketLogging",
            "s3:GetBucketAcl",
            "s3:ListBuckets",
            "ses:ListIdentities",
            "ses:getIdentityDkimAttributes"
          ],
          "Effect": "Allow",
          "Resource": "*"
        }
      ]
    }

Running

To run a standard scan, showing all outputs and results, simply run:

    node index.js

Optional Plugins

Some plugins may require additional permissions not outlined above. Since their required IAM permissions are not included in the SecurityAudit managed policy, these plugins are not included in the exports.js file by default. To enable these plugins, uncomment them in the exports.js file, add the required permissions to an inline IAM policy, and re-run the scan.

Compliance

CloudSploit also supports mapping of its plugins to particular compliance policies. To run the compliance scan, use the --compliance flag. For example:

    node index.js --compliance=hipaa

CloudSploit currently supports the following compliance mappings:

HIPAA

HIPAA scans map CloudSploit plugins to the Health Insurance Portability and Accountability Act of 1996.

Architecture

CloudSploit works in two phases. First, it queries the AWS APIs for various metadata about your account. This is known as the "collection" phase. Once all the necessary data has been collected, the result is passed to the second phase, "scanning." The scan uses the collected data to search for potential misconfigurations, risks, and other security issues, which are then provided as output.
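To make the two-phase flow concrete, here is a tiny illustrative sketch in Python. CloudSploit itself is written in Node.js, and every name below is invented; only the collect-then-scan shape mirrors the architecture described above.

    # Illustrative two-phase pattern: collect API metadata once, then let
    # independent checks scan the cached data without further API calls.

    def collect():
        # The real tool queries AWS APIs here; this fakes a collection result.
        return {"s3": {"listBuckets": [{"Name": "logs", "Versioning": False}]}}

    def bucket_versioning_check(collection):
        results = []
        for bucket in collection["s3"]["listBuckets"]:
            status = 0 if bucket["Versioning"] else 1  # 0 = OK, 1 = WARN
            results.append((status, bucket["Name"]))
        return results

    data = collect()                          # phase 1: collection
    print(bucket_versioning_check(data))      # phase 2: scanning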
Writing a Plugin

Collection Phase

To write a plugin, you must understand which AWS API calls your scan makes. These must be added to the collect.js file, which determines the AWS API calls and the order in which they are made. For example:

    CloudFront: {
      listDistributions: {
        property: 'DistributionList',
        secondProperty: 'Items'
      }
    },

This declaration tells the CloudSploit collection engine to query the CloudFront service using the listDistributions call and then save the results returned under DistributionList.Items.

The second section in collect.js is postcalls, an array of objects defining API calls that rely on other calls being returned first. For example, if you need to first query for all EC2 instances and then loop through each instance with a more detailed call, you would add the EC2:DescribeInstances call to the first calls section and then add the more detailed call to postcalls, setting it to rely on the output of DescribeInstances. An example:

    getGroup: {
      reliesOnService: 'iam',
      reliesOnCall: 'listGroups',
      filterKey: 'GroupName',
      filterValue: 'GroupName'
    },

This section tells CloudSploit to wait until the IAM:listGroups call has been made, and then loop through the data that is returned. The filterKey tells CloudSploit the name of the key from the original response, while filterValue tells it which property to set in the getGroup call filter. For example: iam.getGroup({GroupName: abc}), where abc is the GroupName from the returned list. CloudSploit will loop through each response, re-invoking getGroup for each element.

Scanning Phase

After the data has been collected, it is passed to the scanning engine, where the results are analyzed for risks. Each plugin must export the following:

- title (string): a user-friendly title for the plugin
- category (string): the AWS category (EC2, RDS, ELB, etc.)
- description (string): a description of what the plugin does
- more_info (string): a more detailed description of the risk being tested for
- link (string): an AWS help URL describing the service or risk, preferably with mitigation methods
- recommended_action (string): what the user should do to mitigate the risk found
- run (function): a function that runs the test (see below)

The run function accepts the full collection object obtained in the first phase and calls back with the results and the data source.

Result Codes

Each test has a result code that is used to determine whether the test was successful and its risk level. The following codes are used:

- 0: OKAY: No risks
- 1: WARN: The result represents a potential misconfiguration or issue but is not an immediate risk
- 2: FAIL: The result presents an immediate risk to the security of the account
- 3: UNKNOWN: The results could not be determined (API failure, wrong permissions, etc.)

Tips for Writing Plugins

- Many security risks can be detected using the same API calls. To minimize the number of API calls being made, use the cache helper function to cache the results of an API call made in one test for future tests. For example, the plugins "s3BucketPolicies" and "s3BucketPreventDelete" both call APIs to list every S3 bucket. These can be combined into a single plugin, "s3Buckets", which exports two tests called "bucketPolicies" and "preventDelete". This way, the API is called once, but multiple tests are run on the same results.
- Ensure AWS API calls are used optimally. For example, call describeInstances with empty parameters to get all instances, instead of calling describeInstances multiple times while looping through each instance name.
- Use async.eachLimit to reduce the number of simultaneous API calls. Instead of using a for loop on 100 requests, spread them out using async's eachLimit.

Example

To more clearly illustrate writing a new plugin, let's consider the "IAM Empty Groups" plugin. First, we know that we will need to query for a list of groups via listGroups, then loop through each group and query for the more detailed set of data via getGroup. We'll add these API calls to collect.js.
First, under calls, add:

    IAM: {
      listGroups: {
        property: 'Groups'
      }
    },

The property tells CloudSploit which property to read in the response from AWS.

Then, under postCalls, add:

    IAM: {
      getGroup: {
        reliesOnService: 'iam',
        reliesOnCall: 'listGroups',
        filterKey: 'GroupName',
        filterValue: 'GroupName'
      }
    },

CloudSploit will first get the list of groups; then it will loop through each one, using the group name to get more detailed info via getGroup.

Next, we'll write the plugin. Create a new file in the plugins/iam folder called emptyGroups.js (this plugin already exists, but you can create a similar one for the purposes of this example).

In the file, we'll be sure to export the plugin's title, category, description, link, and more information about it. Additionally, we will add any API calls it makes:

    apis: ['IAM:listGroups', 'IAM:getGroup'],

In the run function, we can obtain the output of the collection phase from earlier by doing:

    var listGroups = helpers.addSource(cache, source,
        ['iam', 'listGroups', region]);

Then, we can loop through each of the results and do:

    var getGroup = helpers.addSource(cache, source,
        ['iam', 'getGroup', region, group.GroupName]);

The helpers function ensures that the proper results are returned from the collection and that they are saved into a "source" variable, which can be returned with the results.

Now, we can write the plugin functionality by checking the data relevant to our requirements:

    if (!getGroup || getGroup.err || !getGroup.data || !getGroup.data.Users) {
        helpers.addResult(results, 3,
            'Unable to query for group: ' + group.GroupName, 'global', group.Arn);
    } else if (!getGroup.data.Users.length) {
        helpers.addResult(results, 1,
            'Group: ' + group.GroupName + ' does not contain any users',
            'global', group.Arn);
        return cb();
    } else {
        helpers.addResult(results, 0,
            'Group: ' + group.GroupName + ' contains ' +
            getGroup.data.Users.length + ' user(s)', 'global', group.Arn);
    }

The addResult function ensures we are adding the results to the results array in the proper format. It accepts the following arguments:

    (results array, score, message, region, resource)

The resource is optional, and the score must be between 0 and 3 to indicate OKAY, WARN, FAIL, or UNKNOWN.

Download CloudSploit Scans

Link: http://feedproxy.google.com/~r/PentestTools/~3/kO89DoOlQUw/cloudsploit-scans-aws-security-scanning.html

Raccoon – A High Performance Offensive Security Tool For Reconnaissance And Vulnerability Scanning

Raccoon is an offensive security tool for reconnaissance and information gathering.

Features

- DNS details
- DNS visual mapping using DNS dumpster
- WHOIS information
- TLS data: supported ciphers, TLS versions, certificate details and SANs
- Port scan
- Services and scripts scan
- URL fuzzing and dir/file detection
- Subdomain enumeration: uses Google dorking, DNS dumpster queries, SAN discovery and bruteforce
- Web application data retrieval:
  - CMS detection
  - Web server info and X-Powered-By
  - robots.txt and sitemap extraction
  - Cookie inspection
  - Extracts all fuzzable URLs
  - Discovers HTML forms
  - Retrieves all email addresses
- Detects known WAFs
- Supports anonymous routing through Tor/proxies
- Uses asyncio for improved performance
- Saves output to files: separates targets by folders and modules by files

Roadmap and TODOs

- Support multiple hosts (read from file)
- Rate limit evasion
- OWASP vulnerabilities scan (RFI, RCE, XSS, SQLi etc.)
- SearchSploit lookup on results
- IP ranges support
- CIDR notation support
- More output formats

About

Raccoon is a tool made for reconnaissance and information gathering with an emphasis on simplicity. It will do everything from fetching DNS records, retrieving WHOIS information, obtaining TLS data, and detecting WAF presence, up to threaded dir busting and subdomain enumeration. Every scan outputs to a corresponding file.

As most of Raccoon's scans are independent and do not rely on each other's results, it utilizes Python's asyncio to run most scans asynchronously.

Raccoon supports Tor/proxy for anonymous routing. It uses default wordlists (for URL fuzzing and subdomain discovery) from the amazing SecLists repository, but different lists can be passed as arguments. For more options, see "Usage".
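To make the asyncio point concrete, here is a minimal standalone sketch of that pattern (illustrative only, not Raccoon's actual code) that probes a few TCP ports concurrently instead of one after another:

    # Independent checks overlapped with asyncio: total runtime is roughly
    # the slowest probe rather than the sum of all probes.
    import asyncio

    async def probe(host: str, port: int) -> None:
        try:
            _, writer = await asyncio.wait_for(
                asyncio.open_connection(host, port), timeout=2)
            print(f"{host}:{port} open")
            writer.close()
            await writer.wait_closed()
        except (asyncio.TimeoutError, OSError):
            print(f"{host}:{port} closed/filtered")

    async def main() -> None:
        await asyncio.gather(*(probe("example.com", p) for p in (22, 80, 443)))

    asyncio.run(main())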
Installation

For the latest stable version:

    pip install raccoon-scanner

Or clone the GitHub repository for the latest features and changes:

    git clone https://github.com/evyatarmeged/Raccoon.git
    cd Raccoon
    python raccoon_src/main.py

Prerequisites

Raccoon uses Nmap to scan ports and also utilizes some other Nmap scripts and features. It is mandatory that you have it installed before running Raccoon. OpenSSL is also used for TLS/SSL scans and should be installed as well.

Usage

    Usage: raccoon [OPTIONS]

    Options:
      --version                      Show the version and exit.
      -t, --target TEXT              Target to scan  [required]
      -d, --dns-records TEXT         Comma separated DNS records to query.
                                     Defaults to: A,MX,NS,CNAME,SOA,TXT
      --tor-routing                  Route HTTP traffic through Tor (uses port
                                     9050). Slows total runtime significantly
      --proxy-list TEXT              Path to proxy list file that would be used
                                     for routing HTTP traffic. A proxy from the
                                     list will be chosen at random for each
                                     request. Slows total runtime
      --proxy TEXT                   Proxy address to route HTTP traffic through.
                                     Slows total runtime
      -w, --wordlist TEXT            Path to wordlist that would be used for URL
                                     fuzzing
      -T, --threads INTEGER          Number of threads to use for URL
                                     fuzzing/subdomain enumeration. Default: 25
      --ignored-response-codes TEXT  Comma separated list of HTTP status codes to
                                     ignore for fuzzing. Defaults to:
                                     302,400,401,402,403,404,503,504
      --subdomain-list TEXT          Path to subdomain list file that would be
                                     used for enumeration
      -S, --scripts                  Run Nmap scan with -sC flag
      -s, --services                 Run Nmap scan with -sV flag
      -f, --full-scan                Run Nmap scan with both -sV and -sC
      -p, --port TEXT                Use this port range for Nmap scan instead of
                                     the default
      --tls-port INTEGER             Use this port for TLS queries. Default: 443
      --skip-health-check            Do not test for target host availability
      -fr, --follow-redirects        Follow redirects when fuzzing. Default: True
      --no-url-fuzzing               Do not fuzz URLs
      --no-sub-enum                  Do not bruteforce subdomains
      -q, --quiet                    Do not output to stdout
      -o, --outdir TEXT              Directory destination for scan output
      --help                         Show this message and exit.

Screenshots

HTB challenge example scan: (screenshot)

Results folder tree after a scan: (screenshot)

Download Raccoon

Link: http://feedproxy.google.com/~r/PentestTools/~3/qSSk6PggN6c/raccoon-high-performance-offensive.html

Portforge.Cr – A Script Which Opens Multiple Sockets From A Specific Port Range You Input

This script is intended to open as many sockets as you wish between 1024 and 65535. Ports lower than 1024 work too, but you have to be the root user for that.

This can be useful when you don't want people to map out your device and see what you're running and what you're not, so it's a small step toward defeating reconnaissance.

Portforge uses a feature built into the Crystal language called fibers. They are very much like system threads, but fibers are a lot more lightweight, and their execution is managed by the process itself. The larger the range you pick, the longer it takes for the script to open every socket, but I've tried my best to optimize the script, so it should only take a couple of minutes (depending on the system, of course).

The script works in two steps. It first performs its own scan of the system to see which ports are already open; the open ports are put on one list and the closed ports on another. The next step is opening the closed ports: the script takes the list of closed ports and opens a socket on every one of them. While the main fiber is opening a socket on every port, another fiber spawned under the main one listens for incoming connections and closes them immediately. This process is repeated indefinitely, or until you interrupt the script. (A rough Python sketch of this technique follows the download link below.)

Usage:

    ./portforge IP startport endport

Download Portforge.Cr
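As promised above, here is a rough Python sketch of the same technique, with threads standing in for Crystal's fibers. It is illustrative only, not a translation of Portforge's source:

    # Bind a listener on every closed port in a range and immediately drop
    # any connection that arrives, mimicking the behaviour described above.
    import socket
    import threading

    def occupy(port: int) -> None:
        try:
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("0.0.0.0", port))  # fails if the port is already in use
            srv.listen(5)
        except OSError:
            return  # port already open: leave the real service alone
        while True:
            conn, _ = srv.accept()
            conn.close()  # accept and close immediately, like the listener fiber

    for p in range(1024, 1100):  # a small range for the example
        threading.Thread(target=occupy, args=(p,), daemon=True).start()

    input("Listening; press Enter to stop.\n")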

Link: http://feedproxy.google.com/~r/PentestTools/~3/9bft7ec1wIk/portforgecr-script-which-opens-multiple.html

macSubstrate – Tool For Interprocess Code Injection On macOS

macSubstrate is a platform tool for interprocess code injection on macOS, similar in function to Cydia Substrate on iOS. Using macSubstrate, you can inject your plugins (.bundle or .framework) into a Mac app (including sandboxed apps) to tweak it at runtime.

- All you need is to get or create plugins for your target app.
- No trouble with modification and codesigning of the original target app.
- No extra work after the target app is updated.
- Super easy to install or uninstall a plugin.
- Loads plugins automatically whenever the target app is relaunched.
- Provides a GUI app to make injection much easier.

Prepare

Disable SIP. Why SIP must be disabled: System Integrity Protection is a security policy that applies to every running process, including privileged code and code that runs out of the sandbox. The policy extends additional protections to components on disk and at run-time, only allowing system binaries to be modified by the system installer and software updates. Code injection and runtime attachments to system binaries are no longer permitted.

Usage

1. Download macSubstrate.app, put it into /Applications, and launch it.
2. Grant authorization if needed.
3. Install a plugin by importing it or dragging it into macSubstrate.
4. Launch the target app.

Steps 3 and 4 can be switched. Once a plugin is installed by macSubstrate, it takes effect immediately. But if you want it to work whenever the target app is relaunched or macOS is restarted, you need to keep macSubstrate running and allow it to launch automatically at login. Uninstall a plugin when you do not need it anymore.

Plugin

macSubstrate supports plugins of .bundle or .framework type, so you just need to create a valid .bundle or .framework file. The most important thing is to add a key macSubstratePlugin to the Info.plist, with a dictionary value containing (a generated example follows the download link below):

- TargetAppBundleID: the target app's CFBundleIdentifier; this tells macSubstrate which app to inject
- Description: brief description of the plugin
- AuthorName: author name of the plugin
- AuthorEmail: author email of the plugin

Please check the demo plugins demo.bundle and demo.framework for details.

Xcode Templates

macSubstrate also provides Xcode templates to help you create plugins conveniently:

    ln -fhs ./macSubstratePluginTemplate ~/Library/Developer/Xcode/Templates/macSubstrate\ Plugin

Launch Xcode, and there will be two new plugin templates for you.

Security

SIP is a security policy on macOS that helps keep you away from potential security risks. Disabling it means you lose the protection SIP provides. If you install a plugin from a developer, you are responsible for the security of that plugin. If you do not trust it, please do not install it. macSubstrate will help verify the code signature of a plugin, and I suggest scanning it with VirusTotal as well. In any case, macSubstrate is just a tool, and it is your choice what plugins to install.

Download macSubstrate
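To illustrate the plist entry described above, the snippet below generates the macSubstrate-specific Info.plist key with Python's plistlib. All values are placeholders, and a real plugin's Info.plist would also carry the usual bundle keys:

    # Sketch: write the macSubstratePlugin dictionary described above to a plist.
    import plistlib

    entry = {
        "macSubstratePlugin": {
            "TargetAppBundleID": "com.example.TargetApp",  # app to inject into
            "Description": "Example tweak plugin",
            "AuthorName": "Jane Developer",
            "AuthorEmail": "jane@example.com",
        }
    }

    with open("Info.plist", "wb") as fp:
        plistlib.dump(entry, fp)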

Link: http://feedproxy.google.com/~r/PentestTools/~3/x90tIMr7nMY/macsubstrate-tool-for-interprocess-code.html

Pure Blood v2.0 – A Penetration Testing Framework Created For Hackers / Pentester / Bug Hunter

A penetration testing framework created for hackers / pentesters / bug hunters.

Web Pentest / Information Gathering:

- Banner Grab
- Whois
- Traceroute
- DNS Record
- Reverse DNS Lookup
- Zone Transfer Lookup
- Port Scan
- Admin Panel Scan
- Subdomain Scan
- CMS Identify
- Reverse IP Lookup
- Subnet Lookup
- Extract Page Links
- Directory Fuzz (NEW)
- File Fuzz (NEW)
- Shodan Search (NEW)
- Shodan Host Lookup (NEW)

Web Application Attack: (NEW)

- Wordpress
  - WPScan
  - WPScan Bruteforce
  - WordPress Plugin Vulnerability Checker, with checks for (more to be added soon):
    - WordPress Woocommerce - Directory Traversal
    - WordPress Plugin Booking Calendar 3.0.0 - SQL Injection / Cross-Site Scripting
    - WordPress Plugin WP with Spritz 1.0 - Remote File Inclusion
    - WordPress Plugin Events Calendar - 'event_id' SQL Injection
- Auto SQL Injection, featuring:
  - Union based
  - (Error Output = False) detection
  - Tested on 100+ websites

Generator:

- Deface Page
- Password Generator (NEW)
- Text To Hash (NEW)

Installation

Works with any Python version.

    $ git clone https://github.com/cr4shcod3/pureblood
    $ cd pureblood
    $ pip install -r requirements.txt

DEMO

Web Pentest: (video)
Web Application Attack: (video)

Built With

- Colorama
- Requests
- Python-whois
- Dnspython
- BeautifulSoup
- Shodan

Authors

Cr4sHCoD3 - Pure Blood

Download Pure Blood v2.0

Link: http://feedproxy.google.com/~r/PentestTools/~3/PcrKCodaoSA/pure-blood-v20-penetration-testing.html

Cred Scanner – A Simple File-Based Scanner To Look For Potential AWS Access And Secret Keys In Files

A simple command line tool for finding AWS credentials in files. Optimized for use with Jenkins and other CI systems.

I suspect there are other, better tools out there (such as git-secrets), but I couldn't find anything to run a quick and dirty scan that also integrates well with Jenkins.

Usage:

To install, just copy it where you want it and install the requirements:

    pip install -r ./requirements.txt

This was written in Python 3.6.

To run:

    python cred_scanner.py

That will scan the local directory and all subdirectories. It will list the files, which ones have potential access keys, and which files can't be scanned due to the file format. cred_scanner exits with a code of 1 if it finds any potential keys. (A sketch of this style of check follows the download link below.)

    Usage: cred_scanner.py [OPTIONS]

    Options:
      --path TEXT  Path other than the local directory to scan
      --secret     Also look for Secret Key patterns. This may result in many
                   false matches due to the nature of secret keys.
      --help       Show this message and exit.

To run as a test in Jenkins, just use the command line or add it as a step to your Jenkins build. Jenkins will automatically fail the build if it sees the exit code 1.

Download Cred Scanner
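As promised above, here is a sketch of the kind of check such a scanner performs. The patterns below are commonly used shapes for AWS keys, not necessarily cred_scanner's exact regexes:

    # Minimal file-based AWS credential check; exits 1 on any hit so a CI
    # step fails, mirroring the behaviour described above.
    import re
    import sys

    ACCESS_KEY = re.compile(r"(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9])")  # e.g. AKIA...
    SECRET_KEY = re.compile(r"(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])")

    hits = 0
    for path in sys.argv[1:]:
        with open(path, errors="ignore") as fh:
            for lineno, line in enumerate(fh, 1):
                if ACCESS_KEY.search(line) or SECRET_KEY.search(line):
                    print(f"{path}:{lineno}: possible AWS credential")
                    hits += 1

    sys.exit(1 if hits else 0)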

Link: http://feedproxy.google.com/~r/PentestTools/~3/TbqapF5_yuQ/cred-scanner-simple-file-based-scanner.html

Git-Secrets – Prevents You From Committing Secrets And Credentials Into Git Repositories

Prevents you from committing passwords and other sensitive information to a git repository.

Synopsis

    git secrets --scan [-r|--recursive] [--cached] [--no-index] [--untracked] [...]
    git secrets --scan-history
    git secrets --install [-f|--force] [<target-directory>]
    git secrets --list [--global]
    git secrets --add [-a|--allowed] [-l|--literal] [--global] <pattern>
    git secrets --add-provider [--global] <command> [arguments...]
    git secrets --register-aws [--global]
    git secrets --aws-provider [<credentials-file>]

Description

git-secrets scans commits, commit messages, and --no-ff merges to prevent adding secrets into your git repositories. If a commit, commit message, or any commit in a --no-ff merge history matches one of your configured prohibited regular expression patterns, then the commit is rejected.

Installing git-secrets

git-secrets must be placed somewhere in your PATH so that it is picked up by git when running git secrets. You can use the install target of the provided Makefile to install git secrets and the man page. You can customize the install path using the PREFIX and MANPREFIX variables.

    make install

Or, install with Homebrew (for OS X users):

    brew install git-secrets

Warning: You're not done yet! You MUST install the git hooks for every repo that you wish to use with git secrets --install.

Here's a quick example of how to ensure a git repository is scanned for secrets on each commit:

    cd /path/to/my/repo
    git secrets --install
    git secrets --register-aws

Options

Operation Modes

Each of these options must appear first on the command line.

--install
Installs hooks for a repository. Once the hooks are installed for a git repository, commits and non-ff merges for that repository will be prevented from committing secrets.

--scan
Scans one or more files for secrets. When a file contains a secret, the matched text from the file being scanned will be written to stdout and the script will exit with a non-zero RC. Each matched line will be written with the name of the file that matched, a colon, the line number that matched, a colon, and then the line of text that matched. If no files are provided, all files returned by git ls-files are scanned.

--scan-history
Scans the repository including all revisions. When a file contains a secret, the matched text from the file being scanned will be written to stdout and the script will exit with a non-zero RC. Each matched line will be written with the name of the file that matched, a colon, the line number that matched, a colon, and then the line of text that matched.

--list
Lists the git-secrets configuration for the current repo or in the global git config.

--add
Adds a prohibited or allowed pattern.

--add-provider
Registers a secret provider. Secret providers are executables that, when invoked, output prohibited patterns that git-secrets should treat as prohibited.

--register-aws
Adds common AWS patterns to the git config and ensures that keys present in ~/.aws/credentials are not found in any commit. The following checks are added:

- AWS Access Key IDs via [A-Z0-9]{20}
- AWS Secret Access Key assignments via ":" or "=" surrounded by optional quotes
- AWS account ID assignments via ":" or "=" surrounded by optional quotes
- Allowed patterns for example AWS keys (AKIAIOSFODNN7EXAMPLE and wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY)
- Enables using ~/.aws/credentials to scan for known credentials
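As a rough illustration of how the first of these patterns behaves (git-secrets itself is a bash script driving egrep; the Python below is only a demonstration):

    # The access-key-ID check is essentially a 20-character uppercase
    # alphanumeric match; AWS's documented example key is allowlisted.
    import re

    prohibited = re.compile(r"[A-Z0-9]{20}")        # shape from the list above
    allowed = re.compile(r"AKIAIOSFODNN7EXAMPLE")   # AWS's documented example key

    for line in ["aws_access_key_id = AKIAIOSFODNN7EXAMPLE",
                 "aws_access_key_id = AKIAXXXXXXXXXXXXXXXX"]:
        if prohibited.search(line) and not allowed.search(line):
            print("blocked:", line)
        else:
            print("allowed:", line)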
Note: While the patterns registered by this command should catch most instances of AWS credentials, they are not guaranteed to catch them all. git-secrets should be used as an extra means of insurance; you still need to do your due diligence to ensure that you do not commit credentials to a repository.

--aws-provider
Secret provider that outputs credentials found in an INI file. You can optionally provide the path to an INI file.

Options for --install

-f, --force
Overwrites existing hooks if present.

<target-directory>
When provided, installs git hooks to the given directory. The current directory is assumed if <target-directory> is not provided. If the provided <target-directory> is not in a git repository, the directory will be created and hooks will be placed in <target-directory>/hooks. This can be useful for creating git template directories for use with git init --template <target-directory>.

You can run git init on a repository that has already been initialized. From the git init documentation:

    Running git init in an existing repository is safe. It will not overwrite
    things that are already there. The primary reason for rerunning git init
    is to pick up newly added templates (or to move the repository to another
    place if --separate-git-dir is given).

The following git hooks are installed:

- pre-commit: used to check if any of the files changed in the commit use prohibited patterns.
- commit-msg: used to determine if a commit message contains prohibited patterns.
- prepare-commit-msg: used to determine if a merge commit will introduce a history that contains a prohibited pattern at any point. Please note that this hook is only invoked for non fast-forward merges.

Note: Git only allows a single script to be executed per hook. If the repository contains Debian-style subdirectories like pre-commit.d and commit-msg.d, then the git hooks will be installed into these directories, which assumes that you've configured the corresponding hooks to execute all of the scripts found in these directories. If these git subdirectories are not present, then the git hooks will be installed to the git repo's .git/hooks directory.

Examples

Install git hooks to the current directory:

    cd /path/to/my/repository
    git secrets --install

Install git hooks to a repository other than the current directory:

    git secrets --install /path/to/my/repository

Create a git template that has git-secrets installed, and then copy that template into a git repository:

    git secrets --install ~/.git-templates/git-secrets
    git init --template ~/.git-templates/git-secrets

Overwrite existing hooks if present:

    git secrets --install -f

Options for --scan

-r, --recursive
Scans the given files recursively. If a directory is encountered, the directory will be scanned. If -r is not provided, directories will be ignored. -r cannot be used alongside --cached, --no-index, or --untracked.

--cached
Searches blobs registered in the index file.

--no-index
Searches files in the current directory that are not managed by git.

--untracked
In addition to searching the tracked files in the working tree, --scan also searches untracked files.

<files>...
The path to one or more files on disk to scan for secrets. If no files are provided, all files returned by git ls-files are scanned.
Examples

Scan all files in the repo:

    git secrets --scan

Scan a single file for secrets:

    git secrets --scan /path/to/file

Scan a directory recursively for secrets:

    git secrets --scan -r /path/to/directory

Scan multiple files for secrets:

    git secrets --scan /path/to/file /path/to/other/file

You can scan by globbing:

    git secrets --scan /path/to/directory/*

Scan from stdin:

    echo 'hello!' | git secrets --scan -

Options for --list

--global
Lists only the git-secrets configuration in the global git config.

Options for --add

--global
Adds patterns to the global git config.

-l, --literal
Escapes special regular expression characters in the provided pattern so that the pattern is searched for literally.

-a, --allowed
Marks the pattern as allowed instead of prohibited. Allowed patterns are used to filter out false positives.

<pattern>
The regex pattern to search for.

Examples

Add a prohibited pattern to the current repo:

    git secrets --add '[A-Z0-9]{20}'

Add a prohibited pattern to the global git config:

    git secrets --add --global '[A-Z0-9]{20}'

Add a string that is scanned for literally (+ is escaped):

    git secrets --add --literal 'foo+bar'

Add an allowed pattern:

    git secrets --add -a 'allowed pattern'

Options for --register-aws

--global
Adds AWS-specific configuration variables to the global git config.

Options for --aws-provider

[<credentials-file>]
If provided, specifies the custom path to an INI file to scan. If not provided, ~/.aws/credentials is assumed.

Options for --add-provider

--global
Adds the provider to the global git config.

<command>
Provider command to invoke. When invoked, the command is expected to write prohibited patterns separated by new lines to stdout. Any extra arguments provided are passed on to the command.

Examples

Register a secret provider with arguments:

    git secrets --add-provider -- git secrets --aws-provider

Cat secrets out of a file:

    git secrets --add-provider -- cat /path/to/secret/file/patterns

Defining prohibited patterns

egrep-compatible regular expressions are used to determine if a commit or commit message contains any prohibited patterns. These regular expressions are defined using the git config command. It is important to note that different systems use different versions of egrep. For example, when running on OS X, you will use a different version of egrep than when running on something like Ubuntu (BSD vs GNU).

You can add prohibited regular expression patterns to your git config using git secrets --add <pattern>.

Ignoring false positives

Sometimes a regular expression might match false positives. For example, git commit SHAs look a lot like AWS access keys. You can specify many different regular expression patterns as false positives using the following command:

    git secrets --add --allowed 'my regex pattern'

You can also add regular expression patterns that filter false positives to a .gitallowed file located in the repository's root directory. Lines starting with # are skipped (comment lines), and empty lines are also skipped.
First, git-secrets will extract all lines from a file that contain a prohibited match. Included in the matched results will be the full path to the file that was matched, followed by ':', followed by the line number that was matched, followed by the entire line from the file that was matched by a secret pattern. Then, if you've defined allowed regular expressions, git-secrets will check to see if all of the matched lines match at least one of your registered allowed regular expressions. If all of the lines that were flagged as secrets are canceled out by an allowed match, then the subject text does not contain any secrets. If any of the matched lines are not matched by an allowed regular expression, then git-secrets will fail the commit/merge/message.

Important: Just as it is a bad practice to add prohibited patterns that are too greedy, it is also a bad practice to add allowed patterns that are too forgiving. Be sure to test out your patterns using ad-hoc calls to git secrets --scan $filename to ensure they are working as intended.

Secret providers

Sometimes you want to check for an exact pattern match against a set of known secrets. For example, you might want to ensure that no credentials present in ~/.aws/credentials ever show up in a commit. In these cases, it's better to leave these secrets in one location rather than spread them out across git repositories in git configs. You can use "secret providers" to fetch these types of credentials. A secret provider is an executable that, when invoked, outputs prohibited patterns separated by new lines.

You can add secret providers using the --add-provider command:

    git secrets --add-provider -- git secrets --aws-provider

Notice the use of --. This ensures that any arguments associated with the provider are passed to the provider each time it is invoked when scanning for secrets.

Example walkthrough

Let's take a look at an example. Given the following subject text (stored in /tmp/example):

    This is a test!
    password=ex@mplepassword
    password=******
    More test...

And the following registered patterns:

    git secrets --add 'password\s*=\s*.+'
    git secrets --add --allowed --literal 'ex@mplepassword'

Running git secrets --scan /tmp/example will produce the following error output:

    /tmp/example:3:password=******
    [ERROR] Matched prohibited pattern

    Possible mitigations:
    - Mark false positives as allowed using: git config --add secrets.allowed ...
    - List your configured patterns: git config --get-all secrets.patterns
    - List your configured allowed patterns: git config --get-all secrets.allowed
    - Use --no-verify if this is a one-time false positive

Breaking this down, the prohibited pattern value of password\s*=\s*.+ will match the following lines:

    /tmp/example:2:password=ex@mplepassword
    /tmp/example:3:password=******

...but the first match is filtered out because it matches the allowed regular expression of ex@mplepassword. Because there is still a remaining line that did not match, it is considered a secret.

Because matching lines are prefixed with the filename and line number (e.g., /tmp/example:3:...), you can create allowed patterns that take filenames and line numbers into account in the regular expression. For example, you could whitelist an entire file using something like:

    git secrets --add --allowed '/tmp/example:.*'
    git secrets --scan /tmp/example && echo $?
    # Outputs: 0

Alternatively, you could whitelist a specific line number of a file if that line is unlikely to change, using something like the following:

    git secrets --add --allowed '/tmp/example:3:.*'
    git secrets --scan /tmp/example && echo $?
    # Outputs: 0

Keep this in mind when creating allowed patterns, to ensure that your allowed patterns are not inadvertently matched due to the fact that the filename is included in the subject text that allowed patterns are matched against.

Skipping validation

Use the --no-verify option in the event of a false-positive match in a commit, merge, or commit message. This will skip the execution of the git hook and allow you to make the commit or merge.
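The walkthrough's filtering algorithm can be mimicked in a few lines. The Python below is a sketch of the logic only, not git-secrets' actual egrep-based implementation:

    # Prohibited matches survive only if no allowed pattern cancels them out.
    import re

    prohibited = [re.compile(r"password\s*=\s*.+")]
    allowed = [re.compile(re.escape("ex@mplepassword"))]  # --add --allowed --literal

    lines = ["This is a test!",
             "password=ex@mplepassword",
             "password=******",
             "More test..."]

    failed = False
    for lineno, line in enumerate(lines, 1):
        tagged = f"/tmp/example:{lineno}:{line}"  # filename:line:text, as printed
        if any(p.search(tagged) for p in prohibited):
            if not any(a.search(tagged) for a in allowed):
                print(tagged)  # survives allowlisting, so it's a real finding
                failed = True

    if failed:
        print("[ERROR] Matched prohibited pattern")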
Download Git-Secrets

Link: http://feedproxy.google.com/~r/PentestTools/~3/xSZcv6J7Nwo/git-secrets-prevents-you-from.html

Cloud Custodian – Rules Engine For Cloud Security, Cost Optimization, And Governance, DSL In Yaml For Policies To Query, Filter, And Take Actions On Resources

Cloud Custodian is a rules engine for AWS fleet management. It allows users to define policies to enable a well-managed cloud infrastructure that's both secure and cost-optimized. It consolidates many of the ad hoc scripts organizations have into a lightweight and flexible tool, with unified metrics and reporting.

Custodian can be used to manage AWS accounts by ensuring real-time compliance with security policies (like encryption and access requirements), tag policies, and cost management via garbage collection of unused resources and off-hours resource management.

Custodian policies are written in simple YAML configuration files that enable users to specify policies on a resource type (EC2, ASG, Redshift, etc.) and are constructed from a vocabulary of filters and actions.

It integrates with AWS Lambda and AWS CloudWatch Events to provide real-time enforcement of policies, with built-in provisioning of the Lambdas, or it can run as a simple cron job on a server to execute against large existing fleets.

"Engineering the Next Generation of Cloud Governance" by @drewfirment

Features

- Comprehensive support for AWS services and resources (> 100), along with 400+ actions and 300+ filters to build policies with.
- Supports arbitrary filtering on resources with nested boolean conditions.
- Dry run any policy to see what it would do.
- Automatically provisions AWS Lambda functions, AWS Config rules, and CloudWatch event targets for real-time policies.
- CloudWatch metrics outputs on resources that matched a policy.
- Structured outputs into S3 of which resources matched a policy.
- Intelligent cache usage to minimize API calls.
- Battle-tested: in production on some very large AWS accounts.
- Supports cross-account usage via STS role assumption.
- Supports integration with custom/user-supplied Lambdas as actions.
- Supports both Python 2.7 and Python 3.6 (beta) Lambda runtimes.

Quick Install

    $ virtualenv --python=python2 custodian
    $ source custodian/bin/activate
    (custodian) $ pip install c7n

Usage

First, a policy file needs to be created in YAML format. As an example:

    policies:
    - name: remediate-extant-keys
      description: |
        Scan through all s3 buckets in an account and ensure all objects
        are encrypted (default to AES256).
      resource: s3
      actions:
        - encrypt-keys
    - name: ec2-require-non-public-and-encrypted-volumes
      resource: ec2
      description: |
        Provision a lambda and cloud watch event target that looks at all
        new instances and terminates those with unencrypted volumes.
      mode:
        type: cloudtrail
        events:
          - RunInstances
      filters:
        - type: ebs
          key: Encrypted
          value: false
      actions:
        - terminate
    - name: tag-compliance
      resource: ec2
      description: |
        Schedule a resource that does not meet tag compliance policies
        to be stopped in four days.
      filters:
        - State.Name: running
        - "tag:Environment": absent
        - "tag:AppId": absent
        - or:
          - "tag:OwnerContact": absent
          - "tag:DeptID": absent
      actions:
        - type: mark-for-op
          op: stop
          days: 4

Given that, you can run Cloud Custodian with:

    # Validate the configuration (note this happens by default on run)
    $ custodian validate policy.yml

    # Dryrun on the policies (no actions executed) to see what resources
    # match each policy.
    $ custodian run --dryrun -s out policy.yml

    # Run the policy
    $ custodian run -s out policy.yml

Custodian supports a few other useful subcommands and options, including outputs to S3, CloudWatch metrics, and STS role assumption. Policies go together like Lego bricks with actions and filters.

Consult the documentation for additional information, or reach out on gitter.
Get Involved

- Mailing List: https://groups.google.com/forum/#!forum/cloud-custodian
- Gitter: https://gitter.im/capitalone/cloud-custodian

Additional Tools

The Custodian project also develops and maintains a suite of additional tools at https://github.com/capitalone/cloud-custodian/tree/master/tools:

- Salactus: scale-out S3 scanning.
- Mailer: a reference implementation of sending messages to users to notify them.
- TrailDB: CloudTrail indexing and time-series generation for dashboarding.
- LogExporter: CloudWatch Logs exporting to S3.
- Index: indexing of Custodian metrics and outputs for dashboarding.
- Sentry: log parsing for Python tracebacks, to integrate with https://sentry.io/welcome/

Download Cloud-Custodian

Link: http://feedproxy.google.com/~r/PentestTools/~3/UWXPInoFoI8/cloud-custodian-rules-engine-for-cloud.html

Security Monkey – Tool To Monitor Your AWS And GCP Accounts For Policy Changes And Alert On Insecure Configurations

Security Monkey monitors your AWS and GCP accounts for policy changes and alerts on insecure configurations. Support is available for OpenStack public and private clouds. Security Monkey can also watch and monitor your GitHub organizations, teams, and repositories.

It provides a single UI to browse and search through all of your accounts, regions, and cloud services. The monkey remembers previous states and can show you exactly what changed, and when.

Security Monkey can be extended with custom account types, custom watchers, custom auditors, and custom alerters. It works on CPython 2.7 and is known to work on Ubuntu Linux and OS X.

Project resources

- Security Monkey Architecture
- Quickstart
- User Guide
- Upgrading
- Changelog
- Source code
- Issue tracker
- Gitter.im chat room
- CloudAux
- PolicyUniverse
- Troubleshooting

Instance Diagram

The components that make up Security Monkey are as follows (not AWS specific): (diagram)

Access Diagram

Security Monkey accesses accounts to scan via credentials it is provided ("Role Assumption" where available). (diagram)

Download Security Monkey

Link: http://feedproxy.google.com/~r/PentestTools/~3/5pleW40uQyc/security-monkey-tool-to-monitors-your.html

AWS Key Disabler – A Small Lambda Script That Will Disable Access Keys Older Than A Given Number Of Days

The AWS Key Disabler is a Lambda function that disables AWS IAM user access keys after a set amount of time, in order to reduce the risk associated with old access keys.

AWS Lambda Architecture: (diagram)
SysOps Output for EndUser: (diagram)
Developer Toolchain: (diagram)

Current Limitations

- A report containing the output (JSON) of the scan will be sent to a single defined sysadmin account; refer to the report_to attribute in the /grunt/package.json build configuration file.
- Keys are only disabled, not deleted or replaced.

Prerequisites

This script requires the following components to run:

- Node.js with NPM installed: https://nodejs.org/en/
- Grunt.js installed: http://gruntjs.com/
- AWS CLI command line tool installed: https://aws.amazon.com/cli/

It also assumes that you have an AWS account with SES enabled, i.e. the domain verified and sandbox mode removed.

Installation instructions

These instructions are for OS X. Your mileage may vary on Windows and other *nix.

1. Grab yourself a copy of this script.
2. Navigate into the /grunt folder.
3. Set up the Grunt task runner, e.g. install its deps: npm install
4. Fill in the following information in /grunt/package.json:
   - Set the aws_account_number value to your AWS account ID, found at https://portal.aws.amazon.com/gp/aws/manageYourAccount
   - Set the first_warning and last_warning to the age (in days) a key must reach to trigger a warning. These limits trigger an email sent to report_to.
   - Set the expiry to the age in days at which a key expires. At this age the key is disabled and an email is sent to report_to announcing the change.
   - Set the serviceaccount to the account username you want the script to ignore.
   - Set the exclusiongroup to the name of a group assigned to users you want the script to ignore.
   - Set the send_completion_report value to True to enable email delivery via SES.
   - Set the report_to value to the email address you'd like to receive deletion reports at.
   - Set the report_from value to the email address you'd like to use as the sender address for deletion reports. Note that the domain for this needs to be verified in AWS SES.
   - Set the deployment_region to a region that supports Lambda.
   - Set the email_region to a region that supports SES.
     Also ensure that the region has SES sandbox mode disabled; see the AWS region table for support: https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/
5. Ensure you can successfully connect to AWS from the CLI, e.g. run aws iam get-user to verify a successful connection.
6. From the /grunt directory, run grunt bumpup && grunt deployLambda to bump your version number and perform a build/deploy of the Lambda function to the selected region.
7. Invoke the Lambda function manually from the command line using the AWS CLI.

Execute the Lambda function by name, AccessKeyRotation, logging the output of the scan to a file called scan.report.log:

    aws lambda invoke --function-name AccessKeyRotation scan.report.log --region us-east-1
    {
        "StatusCode": 200
    }

Use jq to render the contents of scan.report.log to the console:

    jq '.' scan.report.log
    {
      "reportdate": "2016-06-26 10:37:24.071091",
      "users": [
        { "username": "TestS3User", "userid": "1", "keys": [
            { "age": 72, "changed": false, "state": "key is already in an INACTIVE state", "accesskeyid": "**************Q3GA1" },
            { "age": 12, "changed": false, "state": "key is still young", "accesskeyid": "**************F3AA2" } ] },
        { "username": "BlahUser22", "userid": "2", "keys": [] },
        { "username": "LambdaFake1", "userid": "3", "keys": [
            { "age": 23, "changed": false, "state": "key is due to expire in 1 week (7 days)", "accesskeyid": "**************DFG12" },
            { "age": 296, "changed": false, "state": "key is already in an INACTIVE state", "accesskeyid": "**************4ZASD" } ] },
        { "username": "apiuser49", "userid": "4", "keys": [
            { "age": 30, "changed": true, "state": "key is now EXPIRED! Changing key to INACTIVE state", "accesskeyid": "**************ER2E2" },
            { "age": 107, "changed": false, "state": "key is already in an INACTIVE state", "accesskeyid": "**************AWQ4K" } ] },
        { "username": "UserEMRKinesis", "userid": "5", "keys": [
            { "age": 30, "changed": false, "state": "key is now EXPIRED! Changing key to INACTIVE state", "accesskeyid": "**************MGB41A" } ] },
        { "username": "CDN-Drupal", "userid": "6", "keys": [
            { "age": 10, "changed": false, "state": "key is still young", "accesskeyid": "**************ZDSQ5A" },
            { "age": 5, "changed": false, "state": "key is still young", "accesskeyid": "**************E3ODA" } ] },
        { "username": "ChocDonutUser1", "userid": "7", "keys": [
            { "age": 59, "changed": false, "state": "key is already in an INACTIVE state", "accesskeyid": "**************CSA123" } ] },
        { "username": "ChocDonut2", "userid": "8", "keys": [
            { "age": 60, "changed": false, "state": "key is already in an INACTIVE state", "accesskeyid": "**************FDGD2" } ] },
        { "username": "admin.skynet@cyberdyne.systems.com", "userid": "9", "keys": [
            { "age": 45, "changed": false, "state": "key is already in an INACTIVE state", "accesskeyid": "**************BLQ5GJ" },
            { "age": 71, "changed": false, "state": "key is already in an INACTIVE state", "accesskeyid": "**************GJFF53" } ] }
      ]
    }

Additional configuration options

- You can choose to set the message used for each warning and the final disabling by changing the values under key_disabler.keystates..message
- You can change the length of masking under key_disabler.mask_accesskey_length. The access keys are 20 characters in length.
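At its core, this kind of function is a loop over users and key ages. The boto3 sketch below shows the general idea; it is a simplified stand-in for the project's actual Lambda code, and the 30-day threshold is an invented example (the real value comes from package.json):

    # Simplified sketch of the disabling logic, not the project's Lambda itself.
    from datetime import datetime, timezone
    import boto3

    EXPIRY_DAYS = 30  # example threshold
    iam = boto3.client("iam")

    for user in iam.list_users()["Users"]:
        name = user["UserName"]
        for key in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]:
            age = (datetime.now(timezone.utc) - key["CreateDate"]).days
            if key["Status"] == "Active" and age >= EXPIRY_DAYS:
                # Disable (not delete) the key, matching the tool's behaviour.
                iam.update_access_key(UserName=name,
                                      AccessKeyId=key["AccessKeyId"],
                                      Status="Inactive")
                print(f"disabled key ending {key['AccessKeyId'][-4:]} for {name}")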
Troubleshooting

This script is provided as is. We are happy to answer questions as time allows, but can't give any promises. If things don't work, ensure that:

- You can authenticate successfully against AWS using the AWS CLI command line tool.
- SES is not in sandbox mode and the sender domain has been verified.
- The selected region provides both Lambda and SES: https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/

Bonus Points

Once the Lambda function has been successfully deployed, the following commands can be performed:

    aws lambda list-functions
    openssl dgst -binary -sha256 ..\Releases\AccessKeyRotationPackage.1.0.18.zip | openssl base64
    aws lambda invoke --function-name AccessKeyRotation report.log --region us-east-1
    jq '.' report.log
    jq '.users[] | select(.username=="johndoe")' report.log
    jq '.' report.log | grep age | cut -d':' -f2 | sort -n

Bonus Bonus Points

    jq 'def maximal_by(f): (map(f) | max) as $mx | .[] | select(f == $mx); .users | maximal_by(.keys[].age)' report.log
    jq 'def minimal_by(f): (map(f) | min) as $mn | .[] | select(f == $mn); .users | minimal_by(.keys[].age)' report.log

Download Aws-Key-Disabler

Link: http://feedproxy.google.com/~r/PentestTools/~3/9ua5AsUDhtM/aws-key-disabler-small-lambda-script.html