Plutus and Marlowe have various web applications; currently these are:
- Plutus Playground
- Marlowe Playground
- Marlowe Dashboard
These run on NixOS machines in AWS; in addition they connect to some AWS Lambdas, and some static content is served from AWS S3 buckets. Deployment to this infrastructure is therefore done in two parts:
- Deploying the infrastructure, lambdas and S3 content using terraform
- Deploying the machine configuration using morph
Any machine (including OSX) can be used for deployment as long as it has the following:
- AWS admin credentials for the AWS account we use
- nix installed and working
- gpg installed and working
- if you are not on Linux then access to a Linux remote builder for nix
If you are using OSX you cannot build the lambdas or the NixOS machines locally, so if you want to update the infrastructure you will need to build them on a remote builder with system type "x86_64-linux". See the nix remote builders documentation for more details about setting up a remote builder; nix on docker could be useful for this.
You then need to add your remote build machine to your `/etc/nix/machines` file; nix will then try to use this machine to build the lambdas and NixOS closures. See this guide for more information.
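For example, an entry in the machines file might look something like this (the hostname, user and SSH key path are placeholders, not values from this repository):

```sh
# Register a hypothetical x86_64-linux remote builder with nix.
# Field order: <ssh URI> <system type> <ssh identity file> <max parallel jobs>
echo 'ssh://builder@linux-builder.example.com x86_64-linux /home/me/.ssh/id_builder 4' \
  | sudo tee -a /etc/nix/machines
```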
We use pass to store secrets in the repository; because of this you will need to set up your gpg key:
- Add your key to `./keys/my.name.gpg` (see the sketch after this list)
- Add your name, key filename and key id to the `keys` attribute set in `default.nix`
- Run `$(nix-build -A deployment.importKeys)` to make sure you have everyone else's keys
- Add your key name to any environment you want to be able to deploy to, in the `envs` attribute set in `default.nix`
- Once you've added your key you will need someone who already has access to enable you. To do this, commit your changes to a branch and ask that person to check out the branch, run `$(nix-build -A deployment.the_env_you_want.initPass)` and commit the changes this will have made.
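A sketch of the first step, exporting your public key into the expected file (the name and key id are placeholders; check the existing files under `./keys/` to see whether a binary or ASCII-armoured export is expected):

```sh
# Hypothetical example: export your public GPG key into the keys directory.
# Replace jane.doe and the key id with your own details.
gpg --export 0123456789ABCDEF > ./keys/jane.doe.gpg
```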
If you have not set up AWS authentication but you have enabled MFA, you can run `eval $(getcreds <user.name> 123456)` (where 123456 is the One Time Passcode (OTP)) before you run any other command to set up temporary credentials that are valid for 24 hours. Notice that `$()` captures the output of the shell script and `eval` then evaluates that output (which sets some environment variables).
Yubikeys don't work seamlessly with `awscli`, but they do work. To set them up:
- Log into the AWS console and navigate to the "My Security Credentials" page.
- Add your Yubikey as a "Virtual MFA device".
  Note: AWS offers special support for U2F security keys like Yubikeys. Don't choose that option: it works for the web login, but won't work with `awscli`. If you already added your Yubikey as a "U2F security key", remove it and start again.
- The webpage will prompt you for a QR code. Instead, click the "Show secret key" link below that prompt.
- Copy that secret key, and from your command line call `ykman oath add -t <LABEL> <SECRET_KEY>` (`ykman` is provided by the Plutus `shell.nix`, so it should already be available on the command line).
You're now set up to use your Yubikey as a passcode-generation device for `awscli`.
For more details see this guide.
To generate a code, insert your Yubikey and type:
ykman oath code <LABEL>
It will prompt you to tap the key, and then print a One Time Passcode (OTP). You then use that code (as detailed above) with:
eval $(getcreds <user.name> <CODE>)
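Putting the two together, a typical session might look like this (the OATH label and user name are placeholders):

```sh
# Generate an OTP from the Yubikey (tap the key when prompted)...
ykman oath code aws-mfa
# ...then exchange it for 24-hour temporary AWS credentials
eval $(getcreds jane.doe 123456)
```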
The infrastructure is based around multiple environments, for example `alpha`, `david` etc. Scripts for updating a particular environment exist under the `deployment` attribute, e.g. the main deployment script for the environment `david` can be run with `$(nix-build -A deployment.david.deploy)`. This will run other scripts that will do everything needed. These other scripts can also be run individually, which can be useful if you are playing around with the infrastructure:
- `deployment.env.applyTerraform` will run only the terraform apply command
- `deployment.env.syncS3` will sync the marlowe client, marlowe tutorial and plutus client static code with S3
- `deployment.env.syncPlutusTutorial` will sync the plutus tutorial static code with S3; this is separate as it is 170Mb and so can take a long time
- `deployment.env.terraform-locals` will produce `generated.tf.json`, which contains locals such as `env`
- `deployment.env.terraform-vars` will produce `env.tfvars`, which contains variables such as `symbolic_lambda_file` if you are not on OSX
- `deployment.env.refreshTerraform` will run only the terraform refresh command
Note: terraform is run from a clean, temporary directory every time you make changes, so it will always need to re-create some files, even if no infrastructure changes are required. However, don't get lazy: always read through the proposed changes before answering yes!
Once you have set up an environment with `$(nix-build -A deployment.david.deploy)` you will probably want to stick to using only `$(nix-build -A deployment.david.applyTerraform)` and `$(nix-build -A deployment.david.syncS3)`, avoiding dealing with the large plutus tutorial.
Running the terraform scripts will place an ssh config file in your `~/.ssh/config.d` directory. This gives you easy ssh access to the servers by setting the jump hosts, usernames, dns names etc, but in order for it to work you must include it in your main ssh config. Open or create the file `~/.ssh/config` and add the following line at the top: `Include config.d/plutus_playground.conf`. You can then test the config by running `ssh prometheus.plutus_playground`, which should open an ssh session on the prometheus machine.
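Concretely, assuming `~/.ssh/config` did not exist before, the setup and test look like this:

```sh
# Make the generated per-environment ssh config available to ssh...
echo 'Include config.d/plutus_playground.conf' >> ~/.ssh/config
# ...and check that it works (should open a shell on the prometheus machine)
ssh prometheus.plutus_playground
```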
This configuration is also vital for Morph to work, as it assumes this ssh config, so you must get this working before carrying on.
Once you have run the terraform scripts you will have an up-to-date environment with EC2 instances running; however, these instances won't have the required NixOS configuration yet. To configure them we use morph, and there is just one command for normal use: `morph deploy ./deployment/morph/default.nix switch`.
It is important to note that this is somewhat stateful in that you must run the terraform scripts beforehand to make sure that both the ssh configuration and the machine definitions (machines.json generated by terraform) are correct. Otherwise morph could try to deploy to incorrect EC2 instances. Be especially careful when switching between multiple different environments!
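A minimal sketch of that ordering, assuming you are working on a hypothetical environment called `david`:

```sh
# 1. Update the infrastructure first, so that machines.json and the generated
#    ssh config are correct for this environment
$(nix-build -A deployment.david.applyTerraform)

# 2. Then push the NixOS configuration to the EC2 instances
morph deploy ./deployment/morph/default.nix switch
```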
Now that things are up and running you should be able to get some basic feedback by looking at `https://<env>.goguen.monitoring.iohkdev.io/targets` (where `env` is the environment you deployed to).
Sometimes it is necessary to change the `user_data` field of an EC2 machine; for example, if you upgrade nixpkgs in the machine definition then you should ensure `user_data` is also changed. This ensures that if the machine is ever re-created (or when a new environment is created) the correct initial nixos configuration is used.
When `user_data` is modified, terraform will see there is a difference and ask to re-create the machine. This is often undesirable, and you can work around it as follows:
- add something like the following to the bottom of `output.tf` where the correct `user_data` is used:

      output "prometheus_user_data" {
        value = "${data.template_file.prometheus_user_data.rendered}"
      }
- run `$(nix-build -A deployment.refreshTerraform)`; the user data should be displayed as part of the terraform output on stdout
- go to the AWS console -> EC2 -> instances and find the instance(s) with the user data you want to change
- stop the machine
- change the user data (Instance Settings -> View/Change User Data)
- start the machine
- run `$(nix-build -A deployment.applyTerraform)`
If terraform still thinks it needs to make a change to `user_data`, it's probably because there is a missing or extra newline in the user data. You can fiddle with this by putting the user data in a file, adjusting it, and running `cat userdata | shasum` until you get the same sha that terraform is expecting.
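For example, assuming you have saved the user data to a scratch file called `userdata`:

```sh
# After each tweak (e.g. adding or removing a trailing newline), compare the
# sha1 of the scratch file with the hash terraform reports for user_data
cat userdata | shasum
```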
Finally, you should delete or comment out the `output` you created in `output.tf`, as it creates noise in the output.