Imperva Integration With AWS Security Hub: Expanding Customer Security Visibility

This article explains how Imperva application security integrates with AWS Security Hub to give customers better visibility and feedback on the security status of their AWS-hosted applications.

Securing AWS Applications
Cost reduction, simplified operations, and other benefits are driving organizations to move more and more applications onto AWS delivery platforms; because all of the […]

Link: http://feedproxy.google.com/~r/Imperviews/~3/WJHsuSIFeXs/

Pacu – The AWS Exploitation Framework, Designed For Testing The Security Of Amazon Web Services Environments

Pacu is an open-source AWS exploitation framework designed for offensive security testing against cloud environments. Created and maintained by Rhino Security Labs, Pacu allows penetration testers to exploit configuration flaws within an AWS account, using modules to easily expand its functionality. Current modules enable a range of attacks, including user privilege escalation, backdooring of IAM users, attacking vulnerable Lambda functions, and much more.

Installation
Pacu is a fairly lightweight program, as it requires only Python 3.5+ and pip3 to install a handful of Python libraries. Running install.sh will check your Python version and ensure all Python packages are up to date.

Quick Installation
> git clone https://github.com/RhinoSecurityLabs/pacu
> cd pacu
> bash install.sh
> python3 pacu.py
For a more detailed and user-friendly set of instructions, please check out the Wiki's installation guide.

Pacu's Modular Power
Pacu uses a range of plug-in modules to assist an attacker in enumeration, privilege escalation, data exfiltration, service exploitation, and log manipulation within AWS environments. At present, Pacu has 36 modules for executing AWS attacks, but we'll be working hard to add more in the future, and suggestions for new modules (or even contributions of whole completed modules) are welcome.
To keep pace with ongoing AWS product developments, we've designed Pacu from the ground up with extensibility in mind. A common syntax and data structure keeps modules easy to build and expand on: there is no need to specify AWS regions or make redundant permission checks between modules. A local SQLite database is used to manage and manipulate retrieved data, minimizing API calls (and the associated logs). Reporting and attack auditing are also built into the framework; Pacu assists the documentation process through command logging and exporting, helping build a timeline for the testing process.
We'll be working on improving Pacu's core capabilities and building out a well-documented ecosystem so that cybersecurity researchers and developers can make new modules quickly and easily.

Community
We're always happy to get bug reports in the Pacu framework itself, as well as testing and feedback on different modules, and critical feedback in general to help refine the framework. We hope to see this grow into a key open-source tool for testing AWS security, and we need your help to make that happen! Any support towards this effort, through use, testing, improvement, or just by spreading the word, would be very much appreciated.
If you're interested in contributing directly to the Pacu framework itself, please read our contribution guidelines for code conventions and git flow notes.

Developing Pacu Modules
If you're interested in writing your own modules for Pacu, check out our Module Development wiki page. As you develop new capabilities, please reach out to us; we'd love to add your new modules to the core collection that ships with Pacu.

Pacu Framework Development Goals
- Improve interface formatting
- Database forward-migrations and version tracking
- "Attack Playbooks" to allow for easier use of complex module execution chains
- Colored console output
- Module dry-run functionality
- Allow use of standalone config files
- Plugin architecture improvements

Notes
Pacu is officially supported on OSX and Linux. Pacu is open-source software and is distributed with a BSD-3-Clause license.

Getting Started
The first time Pacu is launched, you will be prompted to start and name a new session.
This session will be used to store AWS key pairs, as well as any data obtained from running various modules. You can have any number of different sessions in Pacu, each with its own set of AWS keys and data, and resume a session at any time (though a restart is currently required to switch between sessions).
Modules require an AWS key, which grants you minimal access to an AWS environment and is comprised of an access key ID and a secret access key. To set your session's keys, use the set_keys command, then follow the prompts to supply a key alias (a nickname for reference), an AWS access key ID, an AWS secret access key, and an AWS session token (if you are using one).
If you are ever stuck, help will bring up a list of the commands that are available.

Basic Commands in Pacu
- list will list the available modules for the regions that were set in the current session.
- help module_name will return the applicable help information for the specified module.
- run module_name will run the specified module with its default parameters.
- run module_name --regions eu-west-1,us-west-1 will run the specified module against the eu-west-1 and us-west-1 regions (for modules that support the --regions argument).

Submitting Requests / Bug Reports
Report vulnerabilities in Pacu directly to us via email: .
Pacu creates error logs within each session's folder, as well as a global error log for out-of-session errors, which is created in the main directory. If you can, please include these logs with your bug reports, as they will dramatically simplify the debugging process.
If you have a feature request, an idea, or a bug to report, please submit it here. Please include a description sufficient to reproduce the bug you found, including tracebacks and reproduction steps, and check for other reports of your bug before filing a new one. Don't submit duplicates.

Wiki
For walkthroughs and full documentation, please visit the Pacu wiki.

Contact Us
We'd love to hear from you, whatever the reason. Shoot us an email anytime!

Disclaimers, and the AWS Acceptable Use Policy
To the best of our knowledge, Pacu's capabilities are compliant with the AWS Acceptable Use Policy, but as a flexible and modular tool we cannot guarantee this will be true in every situation. It is entirely your responsibility to ensure that your use of Pacu is compliant with the AWS Acceptable Use Policy.
Depending on what AWS services you use and what your planned testing entails, you may need to request authorization from Amazon prior to actually running Pacu against your infrastructure. Determining whether such authorization is necessary is your responsibility.
As with any penetration testing tool, it is your responsibility to get proper authorization before using Pacu outside of your own environment.
Pacu is software that comes with absolutely no warranties whatsoever. By using Pacu, you take full responsibility for any and all outcomes that result.

Download Pacu
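To make Pacu's session-database design concrete, here is a minimal Python sketch (not Pacu's actual code; the table layout and file name are invented for the example) of caching enumeration results in a local SQLite database so repeated lookups avoid extra API calls, and the CloudTrail log entries those calls generate:

# Hypothetical sketch of Pacu-style local caching, not Pacu's real schema.
import json
import sqlite3

import boto3  # assumes boto3 is installed and AWS credentials are configured


def get_iam_users(db: sqlite3.Connection) -> list:
    """Return IAM user data, enumerating via the API only on the first call."""
    db.execute("CREATE TABLE IF NOT EXISTS aws_data (key TEXT PRIMARY KEY, value TEXT)")
    row = db.execute("SELECT value FROM aws_data WHERE key = 'iam_users'").fetchone()
    if row:  # cache hit: no API call, no extra CloudTrail entries
        return json.loads(row[0])
    users = boto3.client("iam").list_users()["Users"]
    db.execute("INSERT INTO aws_data VALUES ('iam_users', ?)",
               (json.dumps(users, default=str),))  # default=str handles datetimes
    db.commit()
    return users


with sqlite3.connect("session.db") as db:
    print(f"Found {len(get_iam_users(db))} IAM users")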

Link: http://feedproxy.google.com/~r/PentestTools/~3/Hem0TkDOTrg/pacu-aws-exploitation-framework.html

Imperva and Amazon Partner to Help Mitigate Risks Associated With Cloud Migration

Helping our customers reduce the risks associated with migrating to the cloud, and preventing availability and security incidents, has been a major development focus for Imperva over the last several years.

Why the partnership matters
Although cloud service providers take a host of IT management burdens off of your shoulders when using their platforms, […]

Link: http://feedproxy.google.com/~r/Imperviews/~3/hnQo06MIvTw/

AWS Lambda, Bleedingbit, and Cisco – Paul’s Security Weekly #581

AWS Security Best Practices, Masscan and massive address lists, Bleedingbit vulnerabilities, and a Cisco Zero-Day exploited in the wild! All that and more, on this episode of Paul's Security Weekly! Paul's Stories: Web Security Stats Show XSS & Outdated Software Are Major Problems AWS Security Best Practices: AWS Lambda Security Design for Failure Employee used […]

Link: http://feedproxy.google.com/~r/securityweekly/Lviv/~3/m_WC_9coB3U/

hideNsneak – A CLI For Ephemeral Penetration Testing

This application assists in managing attack infrastructure for penetration testers by providing an interface to rapidly deploy, manage, and take down various cloud services. These include VMs, domain fronting, Cobalt Strike servers, API gateways, and firewalls.

Black Hat Arsenal Video
Demo Video: https://youtu.be/8YTYScLn7pA

Overview
hideNsneak provides a simple interface that allows penetration testers to build ephemeral infrastructure, one that requires minimal overhead. hideNsneak can:
- deploy, destroy, and list cloud instances via EC2 and Digital Ocean (Google Cloud, Azure, and Alibaba Cloud coming soon)
- deploy API gateways (AWS)
- deploy domain fronts via AWS CloudFront and Google Cloud Functions (Azure CDN coming soon)
- proxy through infrastructure
- deploy C2 redirectors
- send and receive files
- port scan via NMAP
- remotely install Burp Collab, Cobalt Strike, Socat, LetsEncrypt, GoPhish, and SQLMAP
- work with teams

Running Locally
A few disclosures for v1.0:
- At this time, all hosts are assumed to be Ubuntu 16.04 Linux.
- Setup is done on your local system (Linux and Mac only). In the future, we're hoping to add a Docker container to decrease initial setup time.
- The only VPS providers currently set up are AWS and Digital Ocean.

Setup:
- You need to make sure that Go is installed. Instructions can be found here.
- The GOPATH environment variable MUST be set.
- Create a new AWS S3 bucket in us-east-1. Ensure this is not public, as it will hold your Terraform state.
- go get github.com/rmikehodges/hideNsneak
- cd $GOPATH/src/github.com/rmikehodges/hideNsneak
- ./setup.sh
- cp config/example-config.json config/config.json and fill in the values:
  - aws_access_id, aws_secret_key, aws_bucket_name, public_key, private_key, ec2_user, and do_user are required at minimum
  - all operators working on the same state must have config values filled in for all the same fields
  - private and public keys must be the same for each operator
- Now you can use the program by running ./hidensneak [command]

Commands
hidensneak help --> run this anytime to get available commands
hidensneak instance deploy
hidensneak instance destroy
hidensneak instance list
hidensneak api deploy
hidensneak api destroy
hidensneak api list
hidensneak domainfront enable
hidensneak domainfront disable
hidensneak domainfront deploy
hidensneak domainfront destroy
hidensneak domainfront list
hidensneak firewall add
hidensneak firewall list
hidensneak firewall delete
hidensneak exec command -c
hidensneak exec nmap
hidensneak exec socat-redirect
hidensneak exec cobaltstrike-run
hidensneak exec collaborator-run
hidensneak socks deploy
hidensneak socks list
hidensneak socks destroy
hidensneak socks proxychains
hidensneak socks socksd
hidensneak install burp
hidensneak install cobaltstrike
hidensneak install socat
hidensneak install letsencrypt
hidensneak install gophish
hidensneak install nmap
hidensneak install sqlmap
hidensneak file push
hidensneak file pull
For all commands, you can run --help after any of them to get guidance on what flags to use.

Organization
terraform --> Terraform modules
ansible --> Ansible roles and playbooks
assets --> random assets for the beauty of this project
cmd --> frontend interface package
deployer --> backend commands and structs
main.go --> where the magic happens

IAM Permissions

Google Domain Fronting
- App Engine API enabled
- Cloud Functions API enabled
- Project editor or higher permissions

Miscellaneous
A default security group named hideNsneak, which is full-open, is created in all AWS regions.
All instances are configured with iptables to only allow port 22/tcp upon provisioning.
If your program starts throwing Terraform errors indicating a resource is not found, you may need to remove the problematic Terraform resources from the state. You can do this by running the following:

cd $GOPATH/src/github.com/rmikehodges/hideNsneak/terraform
terraform state rm <resource name>

The resource will need to be cleaned up manually if it still exists.

Troubleshooting

Error: configuration for module name here is not present; a provider configuration block is required for all operations

This is usually due to artifacts being left in the state from old deployments. Below are instructions on how to remove those artifacts from your state. If they are live resources, they will need to be manually destroyed via the cloud provider's administration panel.

cd $GOPATH/src/github.com/rmikehodges/hideNsneak/terraform
terraform state rm <module or resource name>

Error: Error locking state: Error acquiring the state lock: ConditionalCheckFailedException: The conditional request failed status code: 400, request id: P7BUM7NA56LQEJQC20A3SE2SOVVV4KQNSO5AEMVJF66Q9ASUAAJG Lock Info: ID: 4919d588-6b29-4aa7-d917-2bcb67c14ab4

If this does not go away after another user has finished deploying, it is usually due to Terraform not automatically unlocking your state in the face of errors. This can be fixed by running the following:

terraform force-unlock <ID> $GOPATH/src/github.com/rmikehodges/hideNsneak/terraform

Note that this will unlock the state, which may adversely affect any other writes happening in the state, so make sure your other users are not actively deploying or destroying anything when you run this.

Download hideNsneak
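For readers unfamiliar with the ephemeral pattern hideNsneak automates, here is a minimal Python/boto3 sketch of standing up and tearing down a disposable EC2 instance. This is only an illustration of the concept; hideNsneak itself is written in Go and driven by Terraform, and the AMI ID and key name below are placeholders.

# Hypothetical sketch of ephemeral infrastructure: deploy, use, destroy.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Deploy: launch a single throwaway instance.
instance_id = ec2.run_instances(
    ImageId="ami-00000000000000000",  # placeholder Ubuntu AMI
    InstanceType="t2.micro",
    KeyName="engagement-key",         # placeholder SSH key pair name
    MinCount=1,
    MaxCount=1,
)["Instances"][0]["InstanceId"]

ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
print(f"deployed {instance_id}")

# ... engagement work happens here ...

# Destroy: terminate the instance so nothing persists afterward.
ec2.terminate_instances(InstanceIds=[instance_id])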

Link: http://feedproxy.google.com/~r/PentestTools/~3/Roco87TKR5c/hidensneak-cli-for-ephemeral.html

PMapper – A Tool For Quickly Evaluating IAM Permissions In AWS

A project to speed up the process of reviewing an AWS account's IAM configuration.

Purpose
The goal of the AWS IAM auth system is to apply and enforce access controls on actions and resources in AWS. This tool helps identify whether the policies in place will accomplish the intents of the account's owners.
AWS already has tooling in place to check if policies attached to a resource will permit an action. This tool builds on that functionality to identify other potential paths for a user to get access to a resource. This means checking for access to other users, roles, and services as ways to pivot.

How to Use
1. Download this repository and install its dependencies with pip install -r requirements.txt.
2. Ensure you have graphviz installed on your host.
3. Set up an IAM user in your AWS account with a policy that grants the necessary permission to run this tool (see the file mapper-policy.json for an example). The ReadOnlyAccess managed policy works for this purpose. Grab the access keys created for this user.
4. In the AWS CLI, set up a profile for that IAM user with the command aws configure --profile <profile_name>, where <profile_name> is a unique name.
5. Run the command python pmapper.py --profile <profile_name> graph to begin pulling data about your account down to your computer.

Graphing
Principal Mapper has a graph subcommand, which does the heavy work of going through each principal in an account and finding any other principals it can access. The results are stored at ~/.principalmap and used by other subcommands.

Querying
Principal Mapper has a query subcommand that runs a user-defined query. Queries can check if one or more principals can do a given action with a given resource. The supported queries are:
- "can <Principal> do <Action> [with <Resource>]"
- "who can do <Action> [with <Resource>]"
- "preset <preset_query_name> <preset_query_args>"
The first form checks if a principal, or any other principal accessible to it, could perform an action with a resource (default wildcard). The second form enumerates all principals that are able to perform an action with a resource. Note the quotes around the full query; that is so the argument parser knows to take the whole string. Note that <Principal> can be either the full ARN of a principal or the last part of that ARN (user/... or role/...).

Presets
The existing presets are priv_esc and change_perms, which have the same function: they describe which principals have the ability to change their own permissions. If a principal is able to change its own permissions, it effectively has unlimited permissions.

Visualizing
The visualize subcommand produces a DOT and an SVG file that represent the nodes and edges that were graphed. To create the DOT and SVG files, run the command python pmapper.py visualize.
Currently the output is a directed graph, which collates all the edges with the same source and destination nodes. It does not draw edges where the source is an admin. Nodes for admins are colored blue.
Nodes for users with the ability to access admins are colored red (potential priv-esc risk).

Sample Output

Pulling a graph:

esteringer@ubuntu:~/Documents/projects/Skywalker$ python pmapper.py graph
Using profile: skywalker
Pulling data for account [REDACTED]
Using principal with ARN arn:aws:iam::[REDACTED]:user/TestingSkywalker
[+] Starting EC2 checks.
[+] Starting IAM checks.
[+] Starting Lambda checks.
[+] Starting CloudFormation checks.
[+] Completed CloudFormation checks.
[+] Completed EC2 checks.
[+] Completed Lambda checks.
[+] Completed IAM checks.
Created an AWS Graph with 16 nodes and 53 edges
[NODES]
AWSNode("arn:aws:iam::[REDACTED]:user/AdminUser", properties={u'is_admin': True, u'type': u'user'})
AWSNode("arn:aws:iam::[REDACTED]:user/EC2Manager", properties={u'is_admin': False, u'type': u'user'})
AWSNode("arn:aws:iam::[REDACTED]:user/LambdaDeveloper", properties={u'is_admin': False, u'type': u'user'})
AWSNode("arn:aws:iam::[REDACTED]:user/LambdaFullAccess", properties={u'is_admin': False, u'type': u'user'})
AWSNode("arn:aws:iam::[REDACTED]:user/PowerUser", properties={u'is_admin': False, u'rootstr': u'arn:aws:iam::[REDACTED]:root', u'type': u'user'})
AWSNode("arn:aws:iam::[REDACTED]:user/S3ManagementUser", properties={u'is_admin': False, u'type': u'user'})
AWSNode("arn:aws:iam::[REDACTED]:user/S3ReadOnly", properties={u'is_admin': False, u'type': u'user'})
AWSNode("arn:aws:iam::[REDACTED]:user/TestingSkywalker", properties={u'is_admin': False, u'type': u'user'})
AWSNode("arn:aws:iam::[REDACTED]:role/AssumableRole", properties={u'is_admin': False, u'type': u'role', u'name': u'AssumableRole'})
AWSNode("arn:aws:iam::[REDACTED]:role/EC2-Fleet-Manager", properties={u'is_admin': False, u'type': u'role', u'name': u'EC2-Fleet-Manager'})
AWSNode("arn:aws:iam::[REDACTED]:role/EC2Role-Admin", properties={u'is_admin': True, u'type': u'role', u'name': u'EC2Role-Admin'})
AWSNode("arn:aws:iam::[REDACTED]:role/EC2WithS3ReadOnly", properties={u'is_admin': False, u'type': u'role', u'name': u'EC2WithS3ReadOnly'})
AWSNode("arn:aws:iam::[REDACTED]:role/EMR-Service-Role", properties={u'is_admin': False, u'type': u'role', u'name': u'EMR-Service-Role'})
AWSNode("arn:aws:iam::[REDACTED]:role/LambdaRole-S3ReadOnly", properties={u'is_admin': False, u'type': u'role', u'name': u'LambdaRole-S3ReadOnly'})
AWSNode("arn:aws:iam::[REDACTED]:role/ReadOnlyWithLambda", properties={u'is_admin': False, u'type': u'role', u'name': u'ReadOnlyWithLambda'})
AWSNode("arn:aws:iam::[REDACTED]:role/UpdateCredentials", properties={u'is_admin': False, u'type': u'role', u'name': u'UpdateCredentials'})
[EDGES]
(0,1,'ADMIN','can use existing administrative privileges to access')
(0,2,'ADMIN','can use existing administrative privileges to access')
(0,3,'ADMIN','can use existing administrative privileges to access')
(0,4,'ADMIN','can use existing administrative privileges to access')
(0,5,'ADMIN','can use existing administrative privileges to access')
(0,6,'ADMIN','can use existing administrative privileges to access')
(0,7,'ADMIN','can use existing administrative privileges to access')
(0,8,'ADMIN','can use existing administrative privileges to access')
(0,9,'ADMIN','can use existing administrative privileges to access')
(0,10,'ADMIN','can use existing administrative privileges to access')
(0,11,'ADMIN','can use existing administrative privileges to access')
(0,12,'ADMIN','can use existing administrative privileges to access')
(0,13,'ADMIN','can use existing administrative privileges to access')
(0,14,'ADMIN','can use existing administrative privileges to access')
(0,15,'ADMIN','can use existing administrative privileges to access')
(10,0,'ADMIN','can use existing administrative privileges to access')
(10,1,'ADMIN','can use existing administrative privileges to access')
(10,2,'ADMIN','can use existing administrative privileges to access')
(10,3,'ADMIN','can use existing administrative privileges to access')
(10,4,'ADMIN','can use existing administrative privileges to access')
(10,5,'ADMIN','can use existing administrative privileges to access')
(10,6,'ADMIN','can use existing administrative privileges to access')
(10,7,'ADMIN','can use existing administrative privileges to access')
(10,8,'ADMIN','can use existing administrative privileges to access')
(10,9,'ADMIN','can use existing administrative privileges to access')
(10,11,'ADMIN','can use existing administrative privileges to access')
(10,12,'ADMIN','can use existing administrative privileges to access')
(10,13,'ADMIN','can use existing administrative privileges to access')
(10,14,'ADMIN','can use existing administrative privileges to access')
(10,15,'ADMIN','can use existing administrative privileges to access')
(1,9,'EC2_USEPROFILE','can create an EC2 instance and use an existing instance profile to access')
(1,10,'EC2_USEPROFILE','can create an EC2 instance and use an existing instance profile to access')
(1,11,'EC2_USEPROFILE','can create an EC2 instance and use an existing instance profile to access')
(4,9,'EC2_USEPROFILE','can create an EC2 instance and use an existing instance profile to access')
(4,10,'EC2_USEPROFILE','can create an EC2 instance and use an existing instance profile to access')
(4,11,'EC2_USEPROFILE','can create an EC2 instance and use an existing instance profile to access')
(3,13,'LAMBDA_CREATEFUNCTION','can create a Lambda function and pass an execution role to access')
(3,14,'LAMBDA_CREATEFUNCTION','can create a Lambda function and pass an execution role to access')
(3,15,'LAMBDA_CREATEFUNCTION','can create a Lambda function and pass an execution role to access')
(9,10,'EC2_USEPROFILE','can create an EC2 instance and use an existing instance profile to access')
(4,13,'LAMBDA_CREATEFUNCTION','can create a Lambda function and pass an execution role to access')
(9,11,'EC2_USEPROFILE','can create an EC2 instance and use an existing instance profile to access')
(4,8,'STS_ASSUMEROLE','can use STS to assume the role')
(4,14,'LAMBDA_CREATEFUNCTION','can create a Lambda function and pass an execution role to access')
(4,15,'LAMBDA_CREATEFUNCTION','can create a Lambda function and pass an execution role to access')
(15,0,'IAM_CREATEKEY','can create access keys with IAM to access')
(15,1,'IAM_CREATEKEY','can create access keys with IAM to access')
(15,2,'IAM_CREATEKEY','can create access keys with IAM to access')
(15,3,'IAM_CREATEKEY','can create access keys with IAM to access')
(15,4,'IAM_CREATEKEY','can create access keys with IAM to access')
(15,5,'IAM_CREATEKEY','can create access keys with IAM to access')
(15,6,'IAM_CREATEKEY','can create access keys with IAM to access')
(15,7,'IAM_CREATEKEY','can create access keys with IAM to access')

Querying with the graph:

esteringer@ubuntu:~/Documents/projects/Skywalker$ ./pmapper.py --profile skywalker query "who can do s3:GetObject with *"
user/AdminUser can do s3:GetObject with *
user/EC2Manager can do s3:GetObject with * through role/EC2Role-Admin
   user/EC2Manager can create an EC2 instance and use an existing instance profile to access role/EC2Role-Admin
   role/EC2Role-Admin can do s3:GetObject with *
user/LambdaFullAccess can do s3:GetObject with *
user/PowerUser can do s3:GetObject with *
user/S3ManagementUser can do s3:GetObject with *
user/S3ReadOnly can do s3:GetObject with *
user/TestingSkywalker can do s3:GetObject with *
role/EC2-Fleet-Manager can do s3:GetObject with * through role/EC2Role-Admin
   role/EC2-Fleet-Manager can create an EC2 instance and use an existing instance profile to access role/EC2Role-Admin
   role/EC2Role-Admin can do s3:GetObject with *
role/EC2Role-Admin can do s3:GetObject with *
role/EC2WithS3ReadOnly can do s3:GetObject with *
role/EMR-Service-Role can do s3:GetObject with *
role/LambdaRole-S3ReadOnly can do s3:GetObject with *
role/UpdateCredentials can do s3:GetObject with * through user/AdminUser
   role/UpdateCredentials can create access keys with IAM to access user/AdminUser
   user/AdminUser can do s3:GetObject with *

Identifying Potential Privilege Escalation:

esteringer@ubuntu:~/Documents/projects/Skywalker$ ./pmapper.py --profile skywalker query "preset priv_esc user/PowerUser"
Discovered a potential path to change privileges:
user/PowerUser can change privileges because:
   user/PowerUser can access role/EC2Role-Admin because:
      user/PowerUser can create an EC2 instance and use an existing instance profile to access role/EC2Role-Admin
   and role/EC2Role-Admin can change its own privileges.

Planned TODOs
- Complete and verify Python 3 support.
- Smarter control over rate of API requests (queue, managing throttles).
- Better progress reporting.
- Validate and add more checks for obtaining credentials. Several services use service roles that grant the service permission to do an action within a user's account. This could potentially allow a user to obtain access to additional privileges.
- Improving simulate calls (global conditions).
- Completing priv-esc checks (editing attached policies, attaching to a group).
- Adding options for visualization (output type, edge collation).
- Adding more caching.
- Local policy evaluation?
- Cross-account subcommand(s).
- A preset to check if one principal is connected to another.
- Handling policies for buckets or keys with services like S3 or KMS when querying.

Download PMapper
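The "can <Principal> do <Action>" query form builds on the AWS policy-evaluation tooling mentioned in the Purpose section. As a hedged sketch of that underlying primitive (the account ID and user name are placeholders; this checks only the principal's directly attached policies, not the pivot paths PMapper's graph adds), the IAM policy simulator can be called via boto3:

# Sketch of the AWS primitive PMapper builds on, not PMapper itself.
import boto3

iam = boto3.client("iam")

response = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:user/PowerUser",  # placeholder ARN
    ActionNames=["s3:GetObject"],
    # ResourceArns omitted: the simulator defaults to all resources ("*")
)

# EvalDecision is one of: allowed, explicitDeny, implicitDeny
for result in response["EvaluationResults"]:
    print(result["EvalActionName"], "->", result["EvalDecision"])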

Link: http://feedproxy.google.com/~r/PentestTools/~3/Ifx-LagyHdo/pmapper-tool-for-quickly-evaluating-iam.html

CloudSploit Scans – AWS Security Scanning Checks

CloudSploit Scans is an open-source project designed to allow detection of security risks in an AWS account. These scripts are designed to run against an AWS account and return a series of potential misconfigurations and security risks.

Installation
Ensure that NodeJS is installed. If not, install it from here.
git clone git@github.com:cloudsploit/scans.git
npm install

Setup
To begin using the scanner, edit the index.js file with your AWS key, secret, and optionally (for temporary credentials) a session token. You can also set a file containing credentials. To determine the permissions associated with your credentials, see the permissions section below. In the list of plugins in the exports.js file, comment out any plugins you do not wish to run. You can also skip entire regions by modifying the skipRegions array.
You can also set the typical environment variables expected by the AWS SDKs, namely AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN.

Cross Account Roles
When using the hosted scanner, you'll need to create a cross-account IAM role. Cross-account roles enable you to share access to your account with another AWS account using the same policy model that you're used to. The advantage is that cross-account roles are much more secure than key-based access, since an attacker who steals a cross-account role ARN still can't make API calls unless they also infiltrate the authorized AWS account.
To create a cross-account role:
1. Navigate to the IAM console.
2. Click "Roles" and then "Create New Role".
3. Provide a role name (suggested: "cloudsploit").
4. Select the "Role for Cross-Account Access" radio button.
5. Click the "Select" button next to "Allows IAM users from a 3rd party AWS account to access this account."
6. Enter 057012691312 for the account ID (this is the ID of CloudSploit's AWS account).
7. Copy the auto-generated external ID from the CloudSploit web page and paste it into the AWS IAM console textbox.
8. Ensure that "Require MFA" is not selected.
9. Click "Next Step".
10. Select the "Security Audit" policy. Then click "Next Step" again.
11. Click through to create the role.

Permissions
The scans require read-only permissions to your account. This can be done by adding the "Security Audit" AWS managed policy to your IAM user or role.

Security Audit Managed Policy (Recommended)
To configure the managed policy:
1. Open the IAM console.
2. Find your user or role.
3. Click the "Permissions" tab.
4. Under "Managed Policy", click "Attach policy".
5. In the filter box, enter "Security Audit".
6. Select the "Security Audit" policy and save.

Inline Policy (Not Recommended)
If you'd prefer to be more restrictive, the following IAM policy contains the exact permissions used by the scan.
WARNING: This policy will likely change as more plugins are written. If a test returns "UNKNOWN", it is likely missing a required permission.
The preferred method is to use the "Security Audit" policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "cloudfront:ListDistributions",
        "cloudtrail:DescribeTrails",
        "configservice:DescribeConfigurationRecorders",
        "configservice:DescribeConfigurationRecorderStatus",
        "ec2:DescribeInstances",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeAccountAttributes",
        "ec2:DescribeAddresses",
        "ec2:DescribeVpcs",
        "ec2:DescribeFlowLogs",
        "ec2:DescribeSubnets",
        "elasticloadbalancing:DescribeLoadBalancerPolicies",
        "elasticloadbalancing:DescribeLoadBalancers",
        "iam:GenerateCredentialReport",
        "iam:ListServerCertificates",
        "iam:ListGroups",
        "iam:GetGroup",
        "iam:GetAccountPasswordPolicy",
        "iam:ListUsers",
        "iam:ListUserPolicies",
        "iam:ListAttachedUserPolicies",
        "kms:ListKeys",
        "kms:DescribeKey",
        "kms:GetKeyRotationStatus",
        "rds:DescribeDBInstances",
        "rds:DescribeDBClusters",
        "route53domains:ListDomains",
        "s3:GetBucketVersioning",
        "s3:GetBucketLogging",
        "s3:GetBucketAcl",
        "s3:ListBuckets",
        "ses:ListIdentities",
        "ses:getIdentityDkimAttributes"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Running
To run a standard scan, showing all outputs and results, simply run:
node index.js

Optional Plugins
Some plugins may require additional permissions not outlined above. Since their required IAM permissions are not included in the SecurityAudit managed policy, these plugins are not included in the exports.js file by default. To enable these plugins, uncomment them in the exports.js file, add the required permissions to an inline IAM policy if applicable, and re-run the scan.

Compliance
CloudSploit also supports mapping of its plugins to particular compliance policies. To run the compliance scan, use the --compliance flag. For example:
node index.js --compliance=hipaa
CloudSploit currently supports the following compliance mappings:
HIPAA: HIPAA scans map CloudSploit plugins to the Health Insurance Portability and Accountability Act of 1996.

Architecture
CloudSploit works in two phases. First, it queries the AWS APIs for various metadata about your account. This is known as the "collection" phase. Once all the necessary data has been collected, the result is passed to the second phase, "scanning." The scan uses the collected data to search for potential misconfigurations, risks, and other security issues. These are then provided as output.

Writing a Plugin

Collection Phase
To write a plugin, you must understand what AWS API calls your scan makes. These must be added to the collect.js file. This file determines the AWS API calls and the order in which they are made. For example:

CloudFront: {
  listDistributions: {
    property: 'DistributionList',
    secondProperty: 'Items'
  }
},

This declaration tells the CloudSploit collection engine to query the CloudFront service using the listDistributions call and then save the results returned under DistributionList.Items.
The second section in collect.js is postcalls, which is an array of objects defining API calls that rely on other calls being returned first.
For example, if you need to first query for all EC2 instances, and then loop through each instance and run a more detailed call, you would add the EC2:DescribeInstances call to the first calls section and then add the more detailed call to postCalls, setting it to rely on the output of DescribeInstances. An example:

getGroup: {
  reliesOnService: 'iam',
  reliesOnCall: 'listGroups',
  filterKey: 'GroupName',
  filterValue: 'GroupName'
},

This section tells CloudSploit to wait until the IAM:listGroups call has been made, and then loop through the data that is returned. The filterKey tells CloudSploit the name of the key from the original response, while filterValue tells it which property to set in the getGroup call filter. For example: iam.getGroup({GroupName: abc}), where abc is the GroupName from the returned list. CloudSploit will loop through each response, re-invoking getGroup for each element.

Scanning Phase
After the data has been collected, it is passed to the scanning engine, where the results are analyzed for risks. Each plugin must export the following:
- title (string): a user-friendly title for the plugin
- category (string): the AWS category (EC2, RDS, ELB, etc.)
- description (string): a description of what the plugin does
- more_info (string): a more detailed description of the risk being tested for
- link (string): an AWS help URL describing the service or risk, preferably with mitigation methods
- recommended_action (string): what the user should do to mitigate the risk found
- run (function): a function that runs the test (see below)
The run function accepts a collection object containing the full collection obtained in the first phase, and calls back with the results and the data source.

Result Codes
Each test has a result code that is used to determine if the test was successful and its risk level. The following codes are used:
- 0: OKAY: No risks
- 1: WARN: The result represents a potential misconfiguration or issue, but is not an immediate risk
- 2: FAIL: The result presents an immediate risk to the security of the account
- 3: UNKNOWN: The results could not be determined (API failure, wrong permissions, etc.)

Tips for Writing Plugins
- Many security risks can be detected using the same API calls. To minimize the number of API calls being made, utilize the cache helper function to cache the results of an API call made in one test for future tests. For example, two plugins, "s3BucketPolicies" and "s3BucketPreventDelete", both call APIs to list every S3 bucket. These can be combined into a single plugin, "s3Buckets", which exports two tests called "bucketPolicies" and "preventDelete". This way, the API is called once, but multiple tests are run on the same results.
- Ensure AWS API calls are being used optimally. For example, call describeInstances with empty parameters to get all instances, instead of calling describeInstances multiple times, looping through each instance name.
- Use async.eachLimit to reduce the number of simultaneous API calls. Instead of using a for loop on 100 requests, spread them out using async's eachLimit.

Example
To more clearly illustrate writing a new plugin, let's consider the "IAM Empty Groups" plugin. First, we know that we will need to query for a list of groups via listGroups, then loop through each group and query for the more detailed set of data via getGroup. We'll add these API calls to collect.js.
First, under calls, add:

IAM: {
  listGroups: {
    property: 'Groups'
  }
},

The property tells CloudSploit which property to read in the response from AWS.
Then, under postCalls, add:

IAM: {
  getGroup: {
    reliesOnService: 'iam',
    reliesOnCall: 'listGroups',
    filterKey: 'GroupName',
    filterValue: 'GroupName'
  }
},

CloudSploit will first get the list of groups; then it will loop through each one, using the group name to get more detailed info via getGroup.
Next, we'll write the plugin. Create a new file in the plugins/iam folder called emptyGroups.js (this plugin already exists, but you can create a similar one for the purposes of this example).
In the file, we'll be sure to export the plugin's title, category, description, link, and more information about it. Additionally, we will add any API calls it makes:

apis: ['IAM:listGroups', 'IAM:getGroup'],

In the run function, we can obtain the output of the collection phase from earlier by doing:

var listGroups = helpers.addSource(cache, source, ['iam', 'listGroups', region]);

Then, we can loop through each of the results and do:

var getGroup = helpers.addSource(cache, source, ['iam', 'getGroup', region, group.GroupName]);

The helpers function ensures that the proper results are returned from the collection and that they are saved into a "source" variable, which can be returned with the results.
Now, we can write the plugin functionality by checking for the data relevant to our requirements:

if (!getGroup || getGroup.err || !getGroup.data || !getGroup.data.Users) {
  helpers.addResult(results, 3, 'Unable to query for group: ' + group.GroupName, 'global', group.Arn);
} else if (!getGroup.data.Users.length) {
  helpers.addResult(results, 1, 'Group: ' + group.GroupName + ' does not contain any users', 'global', group.Arn);
  return cb();
} else {
  helpers.addResult(results, 0, 'Group: ' + group.GroupName + ' contains ' + getGroup.data.Users.length + ' user(s)', 'global', group.Arn);
}

The addResult function ensures we are adding the results to the results array in the proper format. This function accepts the following:
(results array, score, message, region, resource)
The resource is optional, and the score must be between 0 and 3 to indicate PASS, WARN, FAIL, or UNKNOWN.

Download CloudSploit Scans
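As a compact illustration of the collect-then-scan architecture described above, here is a minimal sketch in Python rather than CloudSploit's Node.js. The world-open security-group check is a simplified stand-in for a real plugin, not CloudSploit code; it shows the key idea that collection calls the API once and scanning then runs offline against the cached data.

# Hypothetical two-phase collect/scan sketch, not part of CloudSploit.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Collection phase: query the AWS API once and cache the metadata.
collection = {"security_groups": ec2.describe_security_groups()["SecurityGroups"]}

# Scanning phase: evaluate the collected data with no further API calls.
# Result codes mirror the convention above: 0 OK, 1 WARN, 2 FAIL, 3 UNKNOWN.
results = []
for group in collection["security_groups"]:
    open_to_world = any(
        ip_range.get("CidrIp") == "0.0.0.0/0"
        for permission in group.get("IpPermissions", [])
        for ip_range in permission.get("IpRanges", [])
    )
    message = "ingress open to 0.0.0.0/0" if open_to_world else "no world-open ingress"
    results.append((2 if open_to_world else 0, group["GroupId"], message))

for code, resource, message in results:
    print(code, resource, message)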

Link: http://feedproxy.google.com/~r/PentestTools/~3/kO89DoOlQUw/cloudsploit-scans-aws-security-scanning.html

Aws_Public_Ips – Fetch All Public IP Addresses Tied To Your AWS Account

aws_public_ips is a tool to fetch all public IP addresses (both IPv4/IPv6) associated with an AWS account. It can be used as a library and as a CLI, and supports the following AWS services (all with both Classic & VPC flavors):
- APIGateway
- CloudFront
- EC2 (and as a result: ECS, EKS, Beanstalk, Fargate, Batch, & NAT Instances)
- ElasticSearch
- ELB (Classic ELB)
- ELBv2 (ALB/NLB)
- Lightsail
- RDS
- Redshift
If a service isn't listed (S3, ElastiCache, etc.), it's most likely because it doesn't have anything to support (i.e., it might not be deployable publicly, it might have all IP addresses resolve to global AWS infrastructure, etc.).

Quick start
Install the gem and run it:

$ gem install aws_public_ips

# Uses default ~/.aws/credentials
$ aws_public_ips
52.84.11.13
52.84.11.83
2600:9000:2039:ba00:1a:cd27:1440:93a1
2600:9000:2039:6e00:1a:cd27:1440:93a1

# With a custom profile
$ AWS_PROFILE=production aws_public_ips
52.84.11.159

CLI reference

$ aws_public_ips --help
Usage: aws_public_ips [options]
    -s, --services <s1>,<s2>,<s3>   List of AWS services to check. Available services: apigateway,cloudfront,ec2,elasticsearch,elb,elbv2,lightsail,rds,redshift. Defaults to all.
    -f, --format <format>           Set output format. Available formats: json,prettyjson,text. Defaults to text.
    -v, --[no-]verbose              Enable debug/trace output
        --version                   Print version
    -h, --help                      Show this help message

Configuration
For authentication, aws_public_ips uses the default aws-sdk-ruby configuration, meaning that the following are checked in order:
1. Environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, AWS_PROFILE
2. Shared credentials files: ~/.aws/credentials and ~/.aws/config
3. Instance profile via metadata endpoint (if running on EC2, ECS, EKS, or Fargate)
For more information, see the AWS SDK documentation on configuration.

IAM permissions
To find the public IPs from all AWS services, the minimal policy needed by your IAM user is:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "apigateway:GET",
        "cloudfront:ListDistributions",
        "ec2:DescribeInstances",
        "elasticloadbalancing:DescribeLoadBalancers",
        "lightsail:GetInstances",
        "lightsail:GetLoadBalancers",
        "rds:DescribeDBInstances",
        "redshift:DescribeClusters"
      ],
      "Resource": "*"
    }
  ]
}

Contact
Feel free to tweet or direct message: @arkadiyt

Download Aws_Public_Ips
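To illustrate the kind of enumeration the tool performs, here is a minimal Python/boto3 sketch covering only the EC2 portion (the gem itself is Ruby and covers many more services): walk the DescribeInstances responses and collect any public IPv4 and IPv6 addresses.

# Hypothetical sketch of the EC2 slice of public-IP enumeration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
addresses = set()

for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            # Public IPv4, if the instance has one assigned.
            if "PublicIpAddress" in instance:
                addresses.add(instance["PublicIpAddress"])
            # IPv6 addresses on attached network interfaces (publicly routable).
            for interface in instance.get("NetworkInterfaces", []):
                for ipv6 in interface.get("Ipv6Addresses", []):
                    addresses.add(ipv6["Ipv6Address"])

print("\n".join(sorted(addresses)))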

Link: http://feedproxy.google.com/~r/PentestTools/~3/aLYdLNP_wx4/awspublicips-fetch-all-public-ip.html