We are currently migrating to a new infrastructure. At the moment this covers only the Marlowe Playground, but the intention is to move the Plutus Playground soon after. The new infrastructure aims to do the following:
- Remove the need for managing servers
- Make setup and configuration easy by using nix to generate scripts to do everything
- Make scaling easier and quicker
- Cut costs
The website is served from AWS API Gateway, which proxies to the following parts:
- static data is stored in S3
- anything that can be run in a Lambda is
- other things (currently web-ghc) run elsewhere (currently on the old server infrastructure)
If you are using OSX you cannot build the lambdas locally, so if you want to update the infrastructure you will need to build the lambdas on a remote builder with system type "x86_64-linux". You can do this by adding such a build machine to your `/etc/nix/machines` file; nix will then try to use this machine to build the lambdas.
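For example, a builder entry in `/etc/nix/machines` could look like the following sketch (the host name, user, key path, job count and speed factor are all placeholders for your own builder's details):

```sh
# Hypothetical remote builder line: <ssh URI> <system type> <ssh identity file> <max jobs> <speed factor>
echo 'ssh://builder@my-linux-builder x86_64-linux /Users/me/.ssh/id_builder 2 1' | sudo tee -a /etc/nix/machines
```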
The scripts produce files for use with nixops (until we get rid of the legacy infra) and so you should provide the location where you want these files to go by setting another terraform variable, e.g. `export TF_VAR_nixops_root=$(pwd)/deployment/nixops`.
The infrastructure is based around multiple environments, for example `alpha`, `david`, etc. Scripts exist for updating a particular environment under the `deployment` attribute, e.g. the main deployment script for the environment `david` can be run with `$(nix-build -A deployment.david.deploy)`. This will run other scripts that will do everything needed. These other scripts can be run individually, which can be useful if you are playing around with the infrastructure:
- `deployment.env.applyTerraform` will run only the `terraform apply` command
- `deployment.env.syncS3` will sync the Marlowe client, Marlowe tutorial and Plutus client static code with S3
- `deployment.env.syncPlutusTutorial` will sync the Plutus tutorial static code with S3; this is separate as it is 170MB and so can take a long time
- `deployment.env.terraform-locals` will produce `generated.tf.json`, which contains locals such as `env`
- `deployment.env.terraform-vars` will produce `env.tfvars`, which contains variables such as `symbolic_lambda_file` if you are not on OSX
Once you have set up an environment with `$(nix-build -A deployment.david.deploy)` you will probably want to stick to using `$(nix-build -A deployment.david.applyTerraform)` and `$(nix-build -A deployment.david.syncS3)` only, avoiding dealing with the large Plutus tutorial.
The scripts require some secrets which are stored encrypted in this repository. To access them you will need to provide your gpg public key to someone who already has access to the secrets.
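If you are unsure how to share your key, one common way (assuming `gpg` is on your path and `you@example.com` is the uid on your key) is to export an ASCII-armoured copy:

```sh
# Export your gpg public key so it can be shared with someone who already has access to the secrets
gpg --armor --export you@example.com > my-public-key.asc
```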
If you have not set up AWS authentication but you have enabled MFA then you can run `eval $($(nix-build -A deployment.getCreds) <user.name> 123456)` (where 123456 is the current MFA code) before you run any other command to set up temporary credentials that are valid for 24 hours. Notice that you use `$()` to evaluate the result of the nix build (which is a shell script) and then you use `eval $()` around that result to evaluate the output of the script.
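For example, the same invocation split into two steps (the username and MFA code here are placeholders) looks like this:

```sh
# 1. nix-build prints the store path of the generated credentials script
creds=$(nix-build -A deployment.getCreds)
# 2. Running that script with your IAM user name and current MFA code prints shell exports;
#    eval brings them into the current shell (the credentials are valid for 24 hours).
eval $($creds john.doe 123456)
```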
Yubikeys don't work seamlessly with `awscli`, but they do work. To set them up:
- Log into the AWS console and navigate to the "My Security Credentials" page.
- Add your Yubikey as a "Virtual MFA device".
  Note: AWS offers special support for U2F security keys like Yubikeys. Don't choose that option: it works for the web login, but won't work with `awscli`. If you already added your Yubikey as a "U2F security key", remove it and start again.
- The webpage will prompt you for a QR code. Instead, click the "Show secret key" link below that prompt.
- Copy that secret key, and from your command line call:
  `ykman oath add -t <LABEL> <SECRET_KEY>`
  (`ykman` is provided by the Plutus `shell.nix`, so it should already be available on the command line.)
You're now set up to use your Yubikey as a passcode-generation device for `awscli`.
For more details see this guide.
To generate a code, insert your Yubikey and type:
`ykman oath code <LABEL>`
It will prompt you to tap the key, and then print a One Time Passcode (OTP). You then use that code (as detailed above) with:
`eval $($(nix-build -A deployment.getCreds) <user.name> <CODE>)`
The legacy infrastructure comprises 2 parts, terraform and nixops:
We use terraform to manage the AWS infrastructure, including networking, load balancing and machines.
- You must have an account in the plutus-playground or dev-mantis AWS account (you will need a lot of capabilities, so an admin account is easiest)
- Authenticate your account in the current shell session
- Create the Route 53 zone you want to use (e.g. playground.plutus.iohkdev.io) and add an NS record in the parent zone.
- Set up ACM for a wildcard certificate on that zone.
- Move into the `deployment/terraform` directory
- Initialize terraform with `terraform init`
- Optionally, if you need to manage multiple workspaces, create a new terraform workspace with `terraform workspace new myname`
- In `variables.tf` make sure that your ssh key is in the variable `ssh_keys` under the entry `myname`. You then need to add an entry in each of the `*_ssh_keys` variables with `myname = ["myname"]`. The key is the environment name and the value is a list of people who can have ssh access to those machines.
- Copy `terraform.tfvars.example` to `terraform.tfvars`, or to a custom tfvars file if you want to pass the `var-file` on the command line.
- Edit `myname.tfvars` or `terraform.tfvars`, changing myname and home directories etc.
- Set `tld` in the tfvars file to your zone
- Check what changes terraform will make with `terraform plan -var-file=myname.tfvars`
- If you are happy with all changes, run `terraform apply -var-file=myname.tfvars`
- You should see a new file `/home/myname/.ssh/config.d/plutus_playground.conf`
- Add the line `Include config.d/*.conf` to the top of your `/home/myname/.ssh/config` file. This will make it easier to ssh to the machines
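Put together, a run of the terraform steps above for an environment called `myname` might look like the following sketch (assuming your tfvars file is named `myname.tfvars` and the `variables.tf`, Route 53 and ACM setup described above is already done):

```sh
cd deployment/terraform
terraform init
terraform workspace new myname            # optional, only if you manage multiple workspaces
terraform plan -var-file=myname.tfvars    # review the planned changes first
terraform apply -var-file=myname.tfvars
```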
You should now have a complete infrastructure, however not much is installed on the machines yet. You can see the available machines and their addresses with `cat ~/.ssh/config.d/plutus_playground.conf`. You can ssh to the machines as `root` in an emergency, but this should never be done unless the machine is completely unreachable from the nixops host. You should always use the `nixops ssh <host>` command on the nixops host instead of directly logging in as root over ssh.
The key for API Gateway (`apiGatewayKey` in the `secrets.json` file mentioned in the next section) can be found in the AWS console in the API Gateway section: go to API Keys (left menu), select the API Key in the list, and then click the Show hyperlink in the API key field on the right hand side.
The API Gateway endpoint does not currently seem to be deployed automatically. It can also be deployed from the console by going to API Gateway, clicking the API to deploy, then in the Resources section clicking Actions and selecting Deploy API from the drop-down menu (see the related Stack Overflow question).
The individual machines now exist but have nothing installed on them. We configure these machines and install services using nixops.
- ssh onto the nixops machine with `ssh nixops.plutus_playground` and accept the fingerprints
- Clone the plutus repository: `git clone https://github.com/input-output-hk/plutus.git`
- Exit the machine and, from the project root on your local machine, copy the generated json files onto the nixops machine: `scp ./deployment/nixops/*.json root@nixops.plutus_playground:~/plutus/deployment/nixops`
- ssh onto the nixops machine again with `ssh -A nixops.plutus_playground` (notice the `-A`, you will need agent forwarding)
- Enter the project: `cd plutus`
- Switch to the branch you want to work with, e.g. `git checkout master`
- Move into the nixops directory: `cd deployment/nixops/`
- Create a file called `secrets.json` that is based on the example file.
- Create a new deployment: `nixops create ./default.nix ./network.nix -d playgrounds`
- Deploy the new deployment: `nixops deploy`
- You should now be able to reach the Plutus Playground at [https://myname.plutus.iohkdev.io](https://myname.plutus.iohkdev.io) and the Marlowe Playground at [https://myname.marlowe.iohkdev.io](https://myname.marlowe.iohkdev.io)
Most of the time, an environment can be updated without touching terraform at all.
- ssh onto the nixops machine again: `ssh -A nixops.plutus_playground`
- update plutus with `cd plutus && git pull`
- deploy the latest with `nixops deploy`
In the case that the terraform code is altered in a way that re-creates the nixops machine, you will need to go through the entire Configure the machines section above. If the nixops machine is not altered, you can copy `machines.json` across and just run `nixops deploy` after applying the terraform code.
WARNING: altering some ssh keys in the terraform instances can result in machines being recreated. Check with others using the machines that it's okay to bring everything down before running any terraform commands. A close inspection of `terraform plan` can also help assess the danger of running `terraform apply`. Usually you don't want to change these keys anyway, as user keys are managed by nixops. As an example, changing `var.nixops_ssh_keys` will result in the nixops machine being re-created, however changing `var.playground_ssh_keys` will only change the `machines.json` file that nixops uses.
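For example, if only the playground machines were re-created, a minimal sequence (reusing the hostnames and paths from the earlier steps) might be:

```sh
# Copy the regenerated machines.json onto the nixops machine and redeploy
scp ./deployment/nixops/machines.json root@nixops.plutus_playground:~/plutus/deployment/nixops/
ssh -A nixops.plutus_playground 'cd plutus/deployment/nixops && nixops deploy'
```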
If you wish to use the continuous delivery deployment server then please read the Readme.
Sometimes it is necessary to change the `user_data` field of an EC2 machine, for example if you want to upgrade nixpkgs on the machine definition in `deployment.nixos` then you should ensure `user_data` is also changed. This ensures that if the machine is ever re-created (or when a new environment is created) the correct initial nixos configuration is used.
When `user_data` is modified, terraform will see there is a difference and ask to re-create the machine. This is often undesirable, and you can work around it as follows:
- Add something like the following to the bottom of `main.tf` where the correct `user_data` is used:
output "user_data" {
value = "${data.template_file.nixops_user_data.rendered}"
}
- Run `terraform refresh -var-file=myvars.tf`
- Go to the AWS console -> EC2 -> Instances and find the instance(s) with the user data you want to change
- Stop the machine
- Change the user data (Instance Settings -> View/Change User Data)
- Start the machine
- Run `terraform apply -var-file=myvars.tf`
If terraform still thinks it needs to make a change to `user_data`, it's probably because there is a missing or extra newline in the user data. You can fiddle with this by putting the user data in a file, adjusting it, and running `cat userdata | shasum` until you get the same sha that terraform is expecting.
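For example, assuming the candidate user data is saved in a file called `userdata` (an arbitrary name), you can compare the hash with and without a trailing newline:

```sh
cat userdata | shasum                     # hash of the file as-is
printf '%s' "$(cat userdata)" | shasum    # hash with any trailing newline stripped
```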
Finally, you should delete the `output` you created in `main.tf`, as it creates noise in the output.
- Go to the AWS Certificate Manager and make sure you select the region to which you wish to add certificates.
- If there are no certificates then click on "Provision a new certificate", otherwise click "Request a certificate". Start the wizard and request a public certificate.
- The domain name should be `*.marlowe.iohkdev.io`.
- Select DNS validation.
- No tags are needed.
- Review your choices and click on "Confirm and Request".
- Now you need to set up DNS validation. On the Validation screen, expand the `*.marlowe.iohkdev.io` domain and click on "Create record in Route 53". You can then Continue, and after a few seconds or minutes your certificate should have the status “Issued”.
- Repeat for the other 2 domains, `*.plutus.iohkdev.io` and `*.goguen.monitoring.iohkdev.io`.