
Releases: cloudamatic/mu

The Goog

10 Apr 13:40
ac7dbab

Google Cloud layer

We can now deploy VPCs, FirewallRules, Servers, ServerPools, and LoadBalancers[1] into a Google Cloud Project. The following is a legal Basket of Kittens for deploying a VM in GCP:

---
  appname: foo
  servers:
  - name: simple
    cloud: Google
    platform: centos7
    size: g1-small

Mu Masters can reside in either AWS or GCP, and can target either platform if appropriate credentials are available. Masters in GCP will transparently use the local machine's service account credentials to deploy resources, if no service account is explicitly provided. The default cloud for a given Mu Master will be "whichever cloud this Master lives in." To create GCP resources from an AWS Master or vice versa, you will need to use the cloud parameter as in the above snippet.

API coverage for supported resource types is largely complete. We have attempted to keep the BoK language as transparently portable as possible, such that most well-written BoKs should Just Work on either platform[2]. Some options, however, are simply GCP- or AWS-specific, and the parser will identify them as such. LoadBalancers in particular have very different functionality in GCP.

To configure GCP support, use mu-configure to provide account credentials and a default target project (GCP accounts are subdivided into Projects, a useful privilege boundary).

12) Google Cloud Platform
  12a. Default Project - linear-theater-184819
  12b. Credentials Vault:Item - secrets:google
  12c. Default Region - us-east4
  12d. Log and Secret Bucket Name - stange-egtlabs-mu-dev

To create service account credentials, visit the GCP console at https://console.cloud.google.com/iam-admin/serviceaccounts/project and create a Service Account with Project Owner rights. The console will allow you to export credentials as a JSON file, which you will then import into a vault[3] so that Mu can use them (see the Credentials Vault:Item above). To import the JSON file to the vault as Mu expects to see it:

  knife vault create secrets google -J my-google-service-account.json

MU::Config refactoring

The last of our truly ugly modules, MU::Config has received a significant internal facelift. Much redundant code has been removed or condensed, and the ordering of validation has been altered. This should mostly be of interest for Mu developers.

Most significantly, schema and validation that is specific to a given cloud layer has been shifted out of MU::Config and into class methods in the appropriate cloud layer resource implementation. For example, LoadBalancers in GCP support an option called named_ports, which is one of the many possible targets for health checks and forwarding rules. Since this is cloud-specific, it is defined in MU::Cloud::Google::LoadBalancer.

        def self.schema(config)
          toplevel_required = []
          schema = {
            "named_ports" => {
              "type" => "array",
              "items" => {
                "type" => "object",
                "required" => ["name", "port"],
                "additionalProperties" => false,
                "description" => "A named network port for a Google instance group, used for health checks and forwarding targets.",
                "properties" => {
                  "name" => {
                    "type" => "string"
                  },
                  "port" => {
                    "type" => "integer"
                  }
                }
              }
            }
          }
          [toplevel_required, schema]
        end

The self.schema method is a new required class method for all cloud resource implementations. It has a sibling, self.validateConfig, which is automatically called by the configuration parser to inspect for cloud-specific configuration errors, e.g. the following in MU::Cloud::AWS::LoadBalancer.validateConfig:

...
          if !lb["classic"]
            if lb["vpc"].nil?
              MU.log "LoadBalancer #{lb['name']} has no VPC configured. Either set 'classic' to true or configure a VPC.", MU::ERR
              ok = false
            end
...
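
For reference, here is a minimal sketch of the overall shape such a method takes, inferred from the excerpt above; the exact signature and argument names are assumptions, not taken from the real source:

        # Hypothetical skeleton of a cloud-specific validator. The
        # signature and argument names are inferred from the excerpt above.
        def self.validateConfig(lb, configurator)
          ok = true
          if !lb["classic"] and lb["vpc"].nil?
            MU.log "LoadBalancer #{lb['name']} has no VPC configured. Either set 'classic' to true or configure a VPC.", MU::ERR
            ok = false
          end
          ok
        end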

These are significant structural changes with a lot of breakage potential, and will require extensive testing before this branch is considered release-ready.

Much of our standard generic config schema is probably AWS-specific, and should be migrated into AWS modules over time. This is not of immediate urgency, however.

BoK params exposed to Chef

If your Basket of Kittens has any parameters in it, these and their values will be exposed to Chef recipes on your nodes in node['mu_params']. Handy.
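
As a hypothetical illustration, assuming a BoK parameter named instance_count and that parameters surface as a simple name-to-value hash:

  # Hypothetical recipe snippet; the parameter name and hash layout
  # are illustrative assumptions.
  count = node['mu_params']['instance_count']
  Chef::Log.info("This deploy was parameterized with instance_count=#{count}")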

Developer FAQ for cloud resource implementations

There is now a README describing the requirements for implementing a new cloud resource.

us_only BoK flag for many resources

Restricts target resources to US-only regions, valuable for government tasks. This is primarily of use in Google Cloud, where many resources can span regions globally (e.g. VPCs). The flag is supported in the AWS layer as well, in case it's of use to cloud layer developers.
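
A hedged example of how the flag reads in a BoK; its placement at the resource level here is an assumption based on the description above:

  vpcs:
  - name: stateside
    cloud: Google
    us_only: true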

AWS GovCloud support

Idiosyncratic behaviors related to AWS GovCloud regions should now be smoothed over. Behavior should be largely similar to the commercial regions, though many services are not supported by Amazon in GovCloud (notably Route53, aka DNSZones).

mu-upload-chef-artifacts no longer deletes things by default

You no longer have to run mu-upload-chef-artifacts with -n to keep it from wasting five minutes purging the Chef server before (re)uploading everything. The default behavior now skips the purge step. If you would like it to purge things, use the -p option.
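
For example, to get the old purge-then-upload behavior back:

  mu-upload-chef-artifacts -p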

CAPVolume Chef library removed

Deprecated for some time, this has been fully removed from our Chef ecosystem. The mu_tools_disk resource should be used, if it is necessary to create volumes via Chef recipes.

Mu Masters no longer create their own log volumes via Chef, but rather through the mu-aws-setup or mu-gcp-setup utility, whichever is applicable. These in turn call the MU::Master.disk utility method, which retains the old functionality for encrypted volumes with LUKS. Right now this is only used for /Mu_Logs.

AWS AutoScale Scheduled Actions

AutoScale can now adjust the min_size, max_size, and/or desired_capacity on a one-off basis or a regular, cron-like schedule. This is now supported in BoK language: https://cloudamatic.gitlab.io/mu/MU/Config/BasketofKittens/server_pools/schedule.html

  schedule:
  - action_name: scale-down-over-night
    recurrence: "0 23 * * *"
    min_size: 1
    max_size: 1
  - action_name: scale-up-during-the-day
    recurrence: "0 10 * * *"
    min_size: 3
    max_size: 3

This can be added to existing deployments with -u and the experimental -l flag in combination. Example:

$ mu-deploy mystack.yaml -u MYSTACK-DEV-2018031216-KU -l
<...stuff...>
Mar 16 16:35:58 - clouds/aws/server_pool - Adding scheduled action to AutoScale group MYSTACK-DEV-2018031216-KU-SVR
{:auto_scaling_group_name=>"MYSTACK-DEV-2018031216-KU-SVR",
 :scheduled_action_name=>"scale-down-over-night",
 :max_size=>1,
 :min_size=>1,
 :recurrence=>"0 20 * * *"}
Mar 16 16:35:58 - clouds/aws/server_pool - Adding scheduled action to AutoScale group MYSTACK-DEV-2018031216-KU-SVR
{:auto_scaling_group_name=>"MYSTACK-DEV-2018031216-KU-SVR",
 :scheduled_action_name=>"scale-up-during-the-day",
 :max_size=>2,
 :min_size=>2,
 :recurrence=>"0 7 * * *"}

mu-node-manage -m chefupgrade

Nodes in long-standing deployments often lag far behind on Chef Client versions, to a point where some community cookbooks cease working. This new mu-node-manage subcommand will jump in and replace the installed client with Mu's current default client version.

Chef Server and Client updates

Chef Client default: 12.21.31-1
Chef Server default: 12.17.15-1

As always, Chef Server upgrades are known to be dodgy (Opscode failsauce, usually around rabbitmq). Approach with caution. Recommended procedure:

  1. If the machine contains production deployments or other important data, image it beforehand for the sake of recoverability
  2. mu-self-update -b the_goog
  3. The Chef Server upgrade will probably fail outright or hang indefinitely. If it does not, you're done, and can skip the rest of these steps.
  4. chef-server-ctl stop && rpm -e chef-server-core && reboot
  5. After a reboot, run service iptables stop, because sometimes a background Chef run will get confused and block a bunch of ports on you.
  6. chef-apply /opt/mu/lib/cookbooks/mu-master/recipes/init.rb to reinstall the Chef Server package and configure it appropriately. Your pre-existing Chef data should remain and not require restoration.
  7. Run mu-self-update -b the_goog again to complete the upgrade

Misc Bugfixes

  • Myriad RHEL7-family 389ds fixes. Workarounds for weird behavior from the package's installer and for missing SELinux permissions.
  • Further low-key syntax updates to cookbook ecosystem in preparation for Chef 13/14 compatibility
  • Display banners correctly and set hostname correctly on RHEL7-family nodes and masters.
  • Mysterious mis-setting of the nodename field in server deployment metadata may be fixed; existing deploy...

WinRM? More Like "rm windows"

08 Dec 16:48
d3570e4

WinRM

All bootstrap communication with Windows nodes now takes place using WinRM instead of Cygwin sshd. We use the HTTPS listener and certificate-based authentication. Cygwin is no longer set up via userdata (but is still in play; see below).

Re-grooming of Windows nodes, whether with mu-node-manage or MommaCat call-ins, will attempt WinRM first. If that doesn't work, it will fall back to attempts over ssh.

Internal SSL cert enhancements

Partly to support the WinRM functionality, we're now using some of the v3 extensions, adding Subject Alternative Names and the like. Conveniently, this seems to fix some new SSL issues that have cropped up on older branches, which were probably(?) triggered by new OpenSSL releases.

mu-node-manage -m certs will invoke the cert-generation method to create node certificates that don't yet exist, which is mostly for adding WinRM authentication certs to existing Windows nodes. It will also regenerate node certificates that have expired.

The disposition of Cygwin

We haven't gotten rid of it, just taken it out of the critical path for bootstrapping of new nodes. It's still being installed and configured as an alternate node access method.

mu-node-manage will attempt to use WinRM first when connecting to Windows nodes. If that fails, it will attempt ssh. This has the side benefit of maintaining backwards-compatibility for existing Windows nodes that were not bootstrapped with WinRM.

We looked at other ssh implementations for Windows, but the ones available for 2012r2 still seem to be sketchy. This is something to revisit with things from the Windows 10/2016 ecosystem, which may have more robust native support.

knife-windows

knife-windows didn't actually support certificate authentication with WinRM, but the underlying gem is the same one we're using elsewhere. It was a trivial port, so I added the functionality and submitted a pull request. For the time being our bundle pulls the gem from our fork of the main repo.

Fork: https://github.com/eGT-Labs/knife-windows
Pull request: chef/knife-windows#438

I don't know how to properly massage the Chef people's internal development processes, so who knows whether they'll ever merge it. I included some helpful documentation on how we're doing the certificate magic as bait.

Known issues

Building Windows nodes continues to be unnecessarily difficult, even with Cygwin out of the way. I've done yet another round of tightening around Windows' random bootstrap idiosyncrasies. Maybe we've accounted for all of the weird edge cases, maybe not.

WinRM connections between Mu and Windows nodes aren't actually verifying (as in SSL) on connect. This appears to be an issue with the WinRM gem: even if we specify the appropriate trusted signing cert, the connection still blows up on verify, so this is the only available workaround. It's a low security risk in a controlled environment, but still a target for later correction. It impacts Mu's library connections over WinRM as well as Chef's via knife-windows, which uses the same gem.

We don't have a good mechanism for cleaning defunct certificates out of Windows' certificate stores, e.g. WinRM client certificates inherited from source machine images. This should probably be some convoluted Powershell that gets added to userdata.

It is currently not possible to invoke the Cygwin installer/package manager from Chef, so the initial installation happens during some pre-Chef magic buried in MU::Cloud. We do the followup, e.g. enabling LSA and configuring sshd, in mu-tools::windows-client. The installer issue is either Opscode's bug or Cygwin's. If it ever gets fixed, we should shift the remaining bits of our Cygwin installation into mu-tools::windows-client, and consider incorporating the cygwin community cookbook to do real package maintenance.

The AWS layer's "build an AMI" code uses SSH to log in and clean up the node it's about to image. That still works, but to be canonically correct for Windows it should probably do that over WinRM. More importantly, this logic should probably get factored away from the cloud-specific implementation.

Chef's user resource seems to have stopped being able to set passwords on Windows. I added a workaround in the windows_users resource in mu-tools to deal with it until they fix their bug. Sprinkle elsewhere as needed.

(([adsi]('WinNT://./#{usr}, user')).psbase.invoke('SetPassword', '#{pass}'))

Upgrade Notes

Chef server and client versions are bumped with this release. Chef Server continues to have upgrade issues that are not of our making. If you encounter a hang or other problem with a chef-server-ctl operation during mu-self-update, I suggest the following steps:

  1. rpm -e chef-server-core && rm -rf /opt/opscode
  2. Reboot the machine, as ludicrous as that sounds (Chef Server often fails to stop its own daemons)
  3. chef-apply /opt/mu/lib/cookbooks/mu-master/recipes/init.rb to reinstall Chef Server cleanly.
  4. Re-run your mu-self-update

It's All Your Vault

04 Aug 17:25

Original PR: #66

Hashicorp Consul and Vault

These are now bundled as part of your Mu Master's service suite. The original idea was to use this as a groomer-agnostic replacement for Chef Vault. However, it's entirely too complicated for automata to use in the way we currently do. In general it seems intended to be used like a backup safe for extremely sensitive data. That's still useful, as is bundling Consul, which supports other useful Hashicorp products, so we're rolling with it.

See the recipe mu-master::vault, which relies on some of Hashicorp's community cookbooks. Note that both Consul and Vault support clustering, so multi-node configurations are theoretically possible.

More on using Hashicorp Vault below.

Rewrite of mu-configure and installation process

The important work for this branch. The new toolchain is best illustrated by stepping through the build process for a new master:

  1. The administrator downloads and executes install/installer from this repo. This is now a very simple stub, which installs a current release of Chef Client, and then...
  2. Runs chef-apply against the standalone recipe mu-master::init. When doing a fresh install it does so by fetching it straight from this repo. This recipe installs Chef Server, clones the cloudamatic/mu GitHub repository, installs our custom Ruby package, handles the installation of Gem bundles, and performs other tasks to get a minimally functional Mu tooling environment working.
  3. Once that recipe completes and we have a minimally functional Chef Server and set of Mu tools, it hands off to mu-configure. As in the past it prompts the administrator for sundry required configuration parameters, though these can also be provided as arguments for unattended installs. Once specified, it applies the configuration, installs all of the other software required for a fully-functioning master, and creates the mu Chef organization and user, associated with the root account.

mu-configure has been completely rewritten as a Ruby script. As before, it has a menu-driven interface, but is capable of much more complicated logic and validation than before. Also, every menu option has a corresponding command-line switch (run mu-configure --help for details), so that installations or reconfigurations can be scripted. Our installer script will pass these arguments to mu-configure unaltered.

Certain software and services which were configured by hand-crafted Bourne shell code have now been moved into reasonably sane Chef recipes, notably mu-master::389ds and mu-master::ssl-certs. mu-master::init, used during the earliest installation phase, is also part of the regular run list, though on an already-bootstrapped node it mostly just enforces permissions and manages local Ruby installations.

Our crusty old mu_setup script has been renamed to deprecated-bash-library.sh so that it's clear what it is. Its only remaining use is as a library for mu-upload-chef-artifacts, which should be slated for a revamp in the future. We are also no longer dependent on shell environment variables such as MU_INSTALLDIR, though these continue to be honored in certain places. The old mu.rc configuration files are no longer needed or acknowledged; all configuration should be derived from mu.yaml (systemwide) and ~/.mu.yaml (individual non-root users).

An intentional corollary of this work is that it should now be possible to build this software into an AMI, scrubbed of certificates and other identifying information, and build new Mu Masters by simply spinning up said AMI and running mu-configure again. This should be much faster and more reliable than building from scratch.

mu-ssh

Handy little utility. It's just a wrapper around mu-node-manage that passes arguments as a search pattern, and then tries to ssh to the resulting nodes, each in turn. So I can do something like mu-ssh drupal and get a quick interactive shell on whatever node or nodes have that string in their name.

Internal SSL CA

Besides now originating from a Chef recipe, we're also attempting to use SAN fields to build in all the names by which our automated services might refer to our Mu Master. This gets around the idiosyncrasies of several software packages, notably Chef's libraries and utilities.

Load Balancer improvements

We can now reference AWS Certificate Manager certificates for SSL listeners, as well as the IAM certificates we've always supported. The Basket of Kittens syntax is unchanged, so it will only be necessary to alter BoKs if there is a name collision across multiple certificates.

We can also now set SSL listener policies with the tls_policy parameter. This only seems to work on Application Load Balancers (an AWS issue), for now. Our new default is to use only TLS1.2 and known-good encryption algorithms, which is standard industry practice.

mu_master_user Chef resource

Simple little resource interface for user management from Chef, for cookbooks which reference mu-master as a dependency. Only valid when the recipe is running on a Mu Master. You can do stuff like this:

mu_master_user "someuser" do
  realname "Some Guy"
  email "[email protected]"
end

The complete set of valid parameters is:

attribute :username, :kind_of => String, :name_attribute => true, :required => true
attribute :realname, :kind_of => String, :required => true
attribute :email, :kind_of => String, :required => true
attribute :password, :kind_of => String, :required => false
attribute :admin, :kind_of => Boolean, :required => false, :default => false
attribute :orgs, :kind_of => Array, :required => false
attribute :remove_orgs, :kind_of => Array, :required => false

Berkshelf

We have a new skeletal platform repository Berksfile, which is recommended for all projects. It will honor cookbook version constraints set in the global Mu Berksfile, so that we don't get those fun "surprise upgrades." See lib/extras/platform_berksfile_base.

Bug fixes, minor enhancements

  • Multiple issues with rsyslog centralization
  • Multiple issues with the mu-jenkins cookbook
  • User allocation could sometimes result in duplicate UIDs
  • Many, many workarounds for Opscode problems with Chef Server installations, upgrades, and restarts
  • A number of nuisance problems with mu-user-manage (or rather, its support libraries)
  • Peculiarities for CentOS7 and RHEL7
  • Less and less dependency on being run in Amazon Web Services. We're about an inch from being ok on Google Cloud Platform, bare metal, etc. Branch the_goog should carry this work the rest of the way.
  • Ironed out most of the Chef 13 deprecation warnings. No functional changes here, but less noise in Chef runs, which makes debugging easier.
  • Nagios alerts stick the local hostname in the subject and body of emails so you can tell what Mu Master they came from
  • Auto-subnetting now behaves correctly in accounts with 5+ AZs
  • mu-utility::nat may be fixed (only partially tested)
  • Minor behavioral cleanups for mu-upload-chef-artifacts
  • Retired some defunct community cookbooks
  • Added Momma Cat logs to logrotate, and enabled compression on rotated logs.
  • Workarounds for irritating edge case bugs in chef-vault
  • Retired some weak SSL ciphers from generic Apache, Nginx, and Splunk configs.


Known Migration Issues

As you update existing Mu masters to the new master branch, you may trip over something dumb like this:

.git/hooks/post-checkout: line 9: .git/hooks/../../install/deprecated-bash-library.sh: No such file or directory

It's safe to just nuke /opt/mu/lib/.git/hooks/* and then run mu-self-update -b master again.
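
In shell terms:

  rm -f /opt/mu/lib/.git/hooks/*
  mu-self-update -b master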

Chef Server seems to have gotten incredibly fragile over time, even the latest release. An upgrade, reconfigure, or occasionally even a restart seems to implode uncomfortably often. This is ultimately an Opscode issue, not related to our work, but may impact anyone doing an update.

Sometimes you have to turn off iptables for a second to get rabbitmq to start, for example (whatever port it's trying to poke isn't pertinent once it's up, and it's not documented).

Another one I've seen but been unable to reproduce is Berkshelf uploads of edited cookbooks melting down with an internal Chef Server error that makes no sense. It's definitely not our fault, whatever it is.

There's not much we can do about that that we're not already doing. Only @rpattcorner has masters that need updating, so if a mu-self-update -b master on those guys faceplants, just call me in to massage them instead of worrying about adding more dopey workarounds.


Hashicorp Vault basics

Meanwhile, from my fact-finding mission with Hashicorp Vault on our customer's behalf:

Usage of this thing is pretty complicated, by design. I think it’s best used for secrets that need to be ultra-protected, and written only infrequently and manually. If you were an SSL vendor, for example, you might stash your root CA key with something like this. When you need a piece of data to be managed with p...


Six Pack ALBs

22 Apr 11:35

  • Application Load Balancers, with some backward compatibility; set classic to true for classic ELBs
  • Major changes in Berkshelf integration
  • Ruby version bump

If you hit issues upgrading an existing master, rerun mu-self-update and then the installer if required.

Release notes here

Nested Cloudformation Merge

02 Sep 18:06
v1.2

address local db security group correctly

Premerge of knife_114 fallback

29 Jan 17:59
Pre-release
v1.1a

fixed bad error checking in Route53 DNS Zone discovery

Directory Services Release

09 Dec 21:13

/opt/mu/etc/mu.yaml

We've needed a more sophisticated config for masters for a while now, so here one is. This is intended to eventually replace mu.rc. For now they exist in parallel, pending @zr2d2's installer and utility work. The new LDAP functionality and mu-user-manage obey this file.

The first time one of them is invoked, it will generate a sample version based on your current mu.rc. Here's FEMA's, minus the LDAP section (more on that later):

---
  mu_admin_email: [email protected]
  jenkins_admin_email: [email protected]
  public_address: mu.femadata.com
  hostname: femadata-mu
  allow_invade_foreign_vpcs: true
  installdir: /opt/mu
  datadir: /opt/mu/var
  ssl:
    cert: /opt/mu/var/ssl/fema.crt
    key: /opt/mu/var/ssl/fema.key
    chain: /opt/mu/var/ssl/godaddy_chain.crt
  scratchpad:
    template_path: /opt/mu/var/fema_platform/utils/includes/scratchpad.html
    max_age: 604800
  mu_repo: cloudamatic/mu.git
  repos:
  - eGT-Labs/fema_platform.git
  aws:
    account_number: 129261966611
    region: us-east-1
    log_bucket_name: mu-logs-production

Scratchpad

We needed a native one-time secret mechanism for sharing passwords and the like, so I made one up. It simply sticks a string in a vault with a random 32-character item name. Hitting a particular URL (/scratchpad/itemname on MommaCat) will display the secret and delete the item.


You can call the creation routine from Ruby with MU::Master.storeScratchPadSecret(text), which returns the item name. If you're in the mu-master cookbook you can call it from Chef too, which I wager we'll find useful. Ditto the other end of it with MU::Master.fetchScratchPadSecret(itemname), if you need to grab a secret directly.
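
A quick hypothetical round trip from Ruby (the secret text and variable names are illustrative):

  # Stash a secret; we get back the random item name to hand out.
  itemname = MU::Master.storeScratchPadSecret("the password is hunter2")
  # Grab it directly; per the one-time semantics above, this should
  # also be its final viewing.
  secret = MU::Master.fetchScratchPadSecret(itemname)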

Self-signed SSL certificates now behave better

We were doing it wrong, leading to cranky browsers and utilities. Updating an existing Mu server will regenerate your internal certificates with correct ones. That will impact your existing nodes, but a knife ssl fetch should calm them down.

Cleaner URLs

Scratchpad made one too many random things listening on weird ports. I gave Apache control of port 443 and set it up as a reverse proxy.
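
A minimal sketch of that sort of stanza, assuming MommaCat listens on a local port (2260 here is an illustrative guess, as are the paths):

  # Hypothetical sketch; the backend port and paths are assumptions.
  <VirtualHost *:443>
    SSLEngine on
    SSLProxyEngine on
    ProxyPass        /scratchpad https://localhost:2260/scratchpad
    ProxyPassReverse /scratchpad https://localhost:2260/scratchpad
  </VirtualHost>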

Unified authentication through LDAP

Mu users are now stored in a directory service. By default it's a bundled setup of 389 DS (http://directory.fedoraproject.org/), which is basically OpenLDAP with some tooling and schemas built around it. The software supports replication, which I bet we're going to want someday.

Nagios, Jenkins, and system users (your non-root ssh logins) all use the directory for authentication now. Chef's web UI could do it too, if we installed it.

As a corollary we get a fairly robust library under MU::Master::LDAP that can speak LDAP to this or to Active Directory. I've already built a nice FEMA tool off of it.

This is only intended to be used for Mu's internal services, but in theory there's nothing stopping you from using it as a general directory and authentication mechanism for your infrastructure. I don't know that we should recommend that, though.

Big deal: We can also slave to an Active Directory infrastructure instead of our bundled directory, which is how we're rolling in FEMA.

The default configuration of the ldap stanza of mu.yaml is going to be right for nearly everyone. If you want to customize, e.g. to dangle yourself off of an existing AD tree, it looks a bit like this:

  ldap:
    type: Active Directory
    base_dn: OU=FEMAData,DC=ad,DC=femadata,DC=com
    user_ou: OU=Users,OU=FEMAData,DC=ad,DC=femadata,DC=com
    bind_creds:
      vault: active_directory
      item: mu_svc_acct
      username_field: dn
      password_field: password
    join_creds:
      vault: active_directory
      item: join_domain
      username_field: username
      password_field: password
    domain_name: ad.femadata.com
    domain_netbios_name: femadata
    user_group_dn: CN=Mu-Users,OU=Groups,OU=Management,OU=FEMAData,DC=ad,DC=femadata,DC=com
    user_group_name: mu-users
    admin_group_dn: CN=Mu,OU=Groups,OU=Management,OU=FEMAData,DC=ad,DC=femadata,DC=com
    admin_group_name: mu
    dcs:
    - dc1.ad.femadata.com
    - dc2.ad.femadata.com

...obviously, all of the AD resources referenced there must exist and be properly configured, as must the vaults that store the sensitive credentials.

Note that if your bind user doesn't have write privileges to the relevant bits of AD, you won't be able to do much with Mu's tools to manage users... but things will still work in a read-only fashion. The assumption is that you're doing it through AD's management tools. This is a legit use case.

When using the inbuilt 389 DS, we're using SSSD instead of Winbind to integrate with PAM and friends. This is apparently the new hotness. We may want to consider porting that to the Linux AD recipes someday.

@amirahav's new AWS functionality

CloudWatch/CloudWatch Logs

We can now create CloudWatch alarms. Alarms can be created within each resource that supports them (e.g. Databases, Servers, Server Pools, Load Balancers), or as first-class citizens in the BoK language itself. This should give you a better picture of the status of various resources. For example, you can now get notified when the number of healthy instances attached to a load balancer falls below a certain number, or configure AWS to "recover" your instance when it fails the System Status Check.

CloudTrail monitoring was also added to CloudWatch directly. Assuming CloudTrail is enabled in your account, you can enable CloudWatch integration by setting enable_cloudtrail_logging: true in the BoK. See the example at demo/cloudwatch-logs-cloudtrail.yaml.

ElastiCache

You can now create cache clusters, choosing between memcached and redis. See the example at demo/cache_cluster.yaml.

The ElastiCache API is pretty awful. To make sure cleanup doesn't hold up other deployments/cleanups, we load an existing deployment instead of searching for resources directly. This can create a situation where not all resources are deleted.

RDS

You can now create database clusters using the Aurora engine. We will manufacture database instances for you at the configuration level based on the number of nodes specified in the cluster_node_count attribute (this resembles how Read Replicas are generated). See the example at demo/db_cluster.yaml.
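
A hedged illustration of the cluster syntax; the engine value and exact attribute placement are assumptions based on the description above:

  databases:
  - name: mydb
    engine: aurora          # assumed engine name
    cluster_node_count: 2   # two instances get manufactured for the cluster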

Databases can now be created from a point-in-time state of an existing database instance. Support for this creation style varies between database engine types.

Password management now works the same way it does on other resources that use passwords: you can choose between a randomly generated password or a password stored in a vault. The option to store a clear-text password in the node structure has been removed; you can retrieve the password only via a vault. This means that both password and unecrypted_master_password, which were deprecated in Internal Wrangler, have been removed.

Other changes:

  • Tags are now propagated to snapshots. This is not exposed at the BoK language level
  • Cleanup - Will now search for other RDS resources without first looking for a database instance
  • Flattened some BoK configuration attributes that were added in Internal Wrangler
  • EC2-Classic support was mostly removed. Cleanup should still work, but deployment is not supported anymore
  • Deployment into the default VPC should work better now; however, the database instance will be set publicly accessible regardless of what publicly_accessible is set to.

FlowLogs

You can now enable traffic logging on a VPC and/or subnets by setting enable_traffic_logging: true on the VPC or subnets. If you enable it at the VPC level it will apply to all subnets and network interfaces. If you only want to enable logging on specific subnets, set this at the subnet level.
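
A hedged example of the VPC-level form described above:

  vpcs:
  - name: mynet
    enable_traffic_logging: true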

AutoScaling

With the addition of CloudWatch, you can now create fully functioning scaling policies. You can configure multiple CloudWatch alarms to point at the same scaling policy to help you scale based on different criteria (e.g. CPU usage and/or network traffic).

Instead of using the simple scaling policy type which adds/removes a fixed number of instances, you can now use step scaling which will add/remove capacity based on the magnitude of the breach. See example at demo/autoscale_step_scaling.yaml

SNS

This is a very rudimentary implementation of the API; it only includes the ability to create notification topics, making it possible for CloudWatch to send notifications by email.

Changes to DNS

DNS registration in servers/server pools now allows you to toggle between registering private or public addresses by using the optional target_type parameter in the BoK. This, in conjunction with a couple of bugfixes, allows you to use either CNAME or A records correctly.

You may also append the environment name (e.g. dev, prod) to the DNS name. This is useful when using environment-specific certificates while still leveraging the automated DNS naming to help find nodes with Chef.

    dns_records:
      - name: vpn
    <% unless $environment == "prod" %>
        append_environment_name: true
    <% end %>
        override_existing: true
        type: CNAME
        ttl: 300
        zone:
          name: boundlessgeo.com
      - ttl: 300
    <% unless $environment == "prod" %>
        append_environment_name: true
    <% end %>
...

testdrive_2

05 Dec 13:05

Final merge of testdrive 2 into master from pull request 31
This version contains all testdrive modifications for the September 2015 re:Invent release

PreMerge Master for internal_wrangler

21 Aug 11:56
Pre-release

Master with minor fixes from RC1 before internal_wrangler merge

Mu tooling, Release Candidate 1

15 May 18:05

This is the first public release candidate for the mu platform and toolset.