You can easily deploy an app: WordPress, LAMP, Node.js, Joomla, Magento, MEAN, Drupal, GitLab CE, Redmine, Nginx, Ghost, Django, PrestaShop, Plesk Hosting Stack, cPanel & WHM for AlmaLinux.
Or you can deploy a clean operating system: Amazon Linux, Ubuntu, Debian, FreeBSD, openSUSE, CentOS.
The Terraform script performs the following operations to set up an Amazon Lightsail instance and its related resources (an illustrative configuration sketch follows the list):
1. Creating an AWS Lightsail instance: The first resource block creates an Amazon Lightsail instance with defined parameters such as the instance name, availability zone, key pair name, blueprint ID (specifying the OS of the instance), and bundle ID (specifying the plan of the instance). It also sets up an automatic snapshot add-on that takes daily backups of the instance at a specified time. Lastly, it adds an "Environment" tag to the instance.

2. Public ports definition on the AWS Lightsail instance: The second resource block specifies the public ports of the Lightsail instance created in step 1. This configuration opens TCP ports 22 (standard SSH port), 80 (standard HTTP port), and 443 (standard HTTPS port), allowing incoming connections on these ports. By opening port 22, you also enable secure shell (SSH) access to the instance, which is essential for administrative tasks.

3. Creating a key pair for AWS Lightsail instances: The third resource block creates a key pair in AWS Lightsail. Key pairs are used to log into Lightsail instances securely, and the name of the key pair is supplied via a variable.

4. Static IP address creation in AWS Lightsail: The fourth resource block creates a static IP address in AWS Lightsail. A static IP ensures that the IP address associated with your instance does not change if you stop and restart the instance.

5. Static IP address attachment to a Lightsail instance: The fifth resource block attaches the static IP address created in step 4 to the Lightsail instance created in step 1.
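For orientation, here is a minimal sketch of what these five resource blocks can look like. The resource, variable, and tag names below are illustrative assumptions, not copies of the project's actual files:

```hcl
# Illustrative sketch only; names, defaults, and the snapshot time are assumptions.
resource "aws_lightsail_instance" "this" {
  name              = var.instance_name
  availability_zone = var.availability_zone
  key_pair_name     = aws_lightsail_key_pair.this.name
  blueprint_id      = var.blueprint_id # OS or application image
  bundle_id         = var.bundle_id    # hardware plan

  add_on {
    type          = "AutoSnapshot" # daily automatic backups
    snapshot_time = "06:00"
    status        = "Enabled"
  }

  tags = {
    Environment = "dev"
  }
}

resource "aws_lightsail_instance_public_ports" "this" {
  instance_name = aws_lightsail_instance.this.name

  port_info {
    protocol  = "tcp"
    from_port = 22 # SSH
    to_port   = 22
  }

  port_info {
    protocol  = "tcp"
    from_port = 80 # HTTP
    to_port   = 80
  }

  port_info {
    protocol  = "tcp"
    from_port = 443 # HTTPS
    to_port   = 443
  }
}

resource "aws_lightsail_key_pair" "this" {
  name = var.key_pair_name
}

resource "aws_lightsail_static_ip" "this" {
  name = "${var.instance_name}-static-ip"
}

resource "aws_lightsail_static_ip_attachment" "this" {
  static_ip_name = aws_lightsail_static_ip.this.name
  instance_name  = aws_lightsail_instance.this.name
}
```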
- Install AWS CLI by following the guide.
- Configure AWS CLI by following the guide.
- Install Terraform by following the guide.
- Install pre-commit by following the guide.
- Install tflint by following the guide.
- Install tfsec by following the guide.
- Install tfupdate by following the guide.
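On macOS, a hypothetical Homebrew-based setup could look like the following; the package names are assumptions, and the linked guides remain the authoritative source:

```bash
# Assumed Homebrew package names; follow the official guides if these differ.
brew install awscli pre-commit tflint tfsec
brew tap hashicorp/tap && brew install hashicorp/tap/terraform
brew install minamijoyo/tfupdate/tfupdate
aws configure   # supply your access key, secret key, and default region
```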
When creating an instance in AWS Lightsail, two crucial parameters need to be specified in `00-variables.tf`: `blueprint_id` and `bundle_id`.

The `blueprint_id` parameter determines the operating system or application that will be installed on your instance. Essentially, it's the blueprint for your instance. For example, if you want to set up an instance running Ubuntu, the `blueprint_id` would correspond to that. You can retrieve a list of all available blueprint IDs using the following command:

```bash
aws lightsail get-blueprints
```

On the other hand, the `bundle_id` parameter specifies the plan for your Lightsail instance, which includes aspects such as RAM, CPU, storage, and data transfer allowance. It's akin to the instance type in other AWS services. To obtain a list of all available bundle IDs, use the following command:

```bash
aws lightsail get-bundles
```

In essence, the `blueprint_id` and `bundle_id` parameters together define the software and hardware configuration of your Lightsail instance.
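As an illustration, the corresponding variables in `00-variables.tf` could be shaped like this; the defaults below are assumptions, and the repository's actual values may differ:

```hcl
# Hypothetical excerpt of 00-variables.tf; defaults are illustrative.
variable "blueprint_id" {
  description = "Blueprint ID: the OS or application image for the instance"
  type        = string
  default     = "wordpress" # e.g. "ubuntu_22_04" for a plain Ubuntu instance
}

variable "bundle_id" {
  description = "Bundle ID: the hardware plan (RAM, CPU, storage, transfer)"
  type        = string
  default     = "nano_2_0" # one of the smallest Lightsail plans
}
```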
`.pre-commit-config.yaml` is useful for identifying simple issues before submission to code review. Pointing these issues out before code review allows a code reviewer to focus on the architecture of a change without wasting time on trivial style nitpicks. Make sure you have all tools from the requirements section installed for the pre-commit hooks to work.
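For Terraform repositories, such a config commonly wires in the `pre-commit-terraform` hooks. The excerpt below is a guess at the general shape, not the repository's actual file:

```yaml
# Hypothetical excerpt; see the repository's .pre-commit-config.yaml for the real contents.
repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.83.5
    hooks:
      - id: terraform_fmt
      - id: terraform_validate
      - id: terraform_tflint
      - id: terraform_tfsec
```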
Make sure you have all tools from the requirements section installed.
You may change variables in `00-variables.tf` to meet your requirements.
Initialize a working directory containing Terraform configuration files using the command:

```bash
terraform init
```

Run the pre-commit hooks to check for formatting and validation issues:

```bash
pre-commit run --all-files
```

Review the changes that Terraform plans to make to your infrastructure using the command:

```bash
terraform plan
```

Deploy using the command:

```bash
terraform apply -auto-approve
```
Once you've run `terraform apply` and the resources are successfully created, a private key file will be generated in your project root directory (where your Terraform files are located). This key can be used to securely connect to the created Amazon Lightsail instance via SSH.

Here's an example of how to use the key to connect via SSH (replace `bitnami` with your username and `instance-static-ip` with your instance's static IP address or hostname):

```bash
ssh -i key-pair-wordpress-1.pem bitnami@instance-static-ip
```
The `backend` block in `01-providers.tf` must remain commented until the bucket and the DynamoDB table are created.

After all your resources have been created, you will need to replace the empty values for `region` and `bucket` in the `backend` block of `01-providers.tf`, since variables are not allowed in this block.

For `region`, specify the region where the S3 bucket and DynamoDB table are located. Use the same value that you have for the `region` variable in `00-variables.tf`.

For `bucket`, you will get its value in the output after the first run of `terraform apply -auto-approve`.

After your values are set, you can uncomment the `backend` block, run `terraform init` again, and then `terraform apply -auto-approve`.

This way, the `terraform.tfstate` file will be stored in an S3 bucket, and DynamoDB will be used for state locking and consistency checking.
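Filled in, the uncommented block could look roughly like this; the bucket and table names below are placeholders, not the generated ones:

```hcl
# Hypothetical example; take the real bucket name from the first apply's output.
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "terraform.tfstate"
    region         = "us-east-1"            # same value as the region variable
    dynamodb_table = "terraform-state-lock" # used for state locking
    encrypt        = true
  }
}
```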
`.github` is useful if you are planning to run a pipeline on GitHub and implement the GitOps approach.

Remove the `.example` part from the names of the files in `.github/workflow` for the GitHub Actions pipeline to work.

Note that you will need to add variables such as `AWS_ACCESS_KEY_ID`, `AWS_DEFAULT_REGION`, and `AWS_SECRET_ACCESS_KEY` in your GitHub project's CI/CD settings section to run your pipeline. Therefore, you will need to create a service user in advance, using AWS Identity and Access Management (IAM), to get values for these variables, and assign the user an access policy that allows it to operate on your resources.

You can delete `.github` if you are not planning to use the GitHub pipeline.
- Terraform Unit Tests

This workflow executes a series of unit tests on the infrastructure code and is triggered by each commit. It begins by running `terraform fmt` to ensure proper code formatting and adherence to Terraform best practices. Subsequently, it performs `terraform validate` to check for syntactical correctness and internal consistency of the code.

To further enhance code quality and security, two additional tools, tfsec and tflint, are utilized:

- tfsec: This step checks the code for potential security issues using tfsec, an open-source security scanner for Terraform. It helps identify security vulnerabilities or misconfigurations in the infrastructure code.
- tflint: This step employs tflint, a Terraform linting tool, to perform additional static code analysis and linting on the Terraform code. It helps detect potential issues and ensures adherence to best practices and coding standards specific to Terraform.
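Locally, these checks map to roughly the following commands; this is an approximation of what the workflow does, not its literal steps:

```bash
terraform fmt -check -recursive  # formatting check
terraform validate               # syntax and internal consistency
tfsec .                          # security scan
tflint                           # static analysis / linting
```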
- Terraform Plan / Apply

This workflow runs on every pull request and on each commit to the main branch. The plan stage of the workflow is used to understand the impact of the IaC changes on the environment by running `terraform plan`. The resulting report is then attached to the PR for easy review. The apply stage runs after the plan when the workflow is triggered by a push to the main branch. This stage takes the plan document and applies the changes after a manual review has signed off, if there are any pending changes to the environment.
- Terraform Drift Detection

This workflow runs periodically to scan your environment for any configuration drift or changes made outside of Terraform. If any drift is detected, a GitHub Issue is raised to alert the maintainers of the project.
If you have a paid version of GitHub and wish to have the approval process implemented, please refer to the provided guide to create an environment called `production` and uncomment this part in `02-terraform-plan-apply.yml`:

```yaml
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
```

And comment out this part in `02-terraform-plan-apply.yml`:

```yaml
on:
  workflow_run:
    workflows: [Terraform Unit Tests]
    types:
      - completed
```
Once the `production` environment is created, set up a protection rule and include any necessary approvers who must approve production deployments. You may also choose to restrict the environment to your main branch. For more detailed instructions, please see here.

If you have a free version of GitHub, no action is needed, but the approval process will not be enabled.
`.gitlab-ci.yml` is useful if you are planning to run a pipeline on GitLab and implement the GitOps approach.

Remove the `.example` part from the name of the `.gitlab-ci.yml` file for the GitLab pipeline to work.

Note that you will need to add variables such as `AWS_ACCESS_KEY_ID`, `AWS_DEFAULT_REGION`, and `AWS_SECRET_ACCESS_KEY` in your GitLab project's CI/CD settings section to run your pipeline. Therefore, you will need to create a service user in advance, using AWS Identity and Access Management (IAM), to get values for these variables, and assign the user an access policy that allows it to operate on your resources.

You can delete `.gitlab-ci.yml` if you are not planning to use the GitLab pipeline.
- Terraform Unit Tests

This workflow executes a series of unit tests on the infrastructure code and is triggered by each commit. It begins by running `terraform fmt` to ensure proper code formatting and adherence to Terraform best practices. Subsequently, it performs `terraform validate` to check for syntactical correctness and internal consistency of the code.

To further enhance code quality and security, two additional tools, tfsec and tflint, are utilized:

- tfsec: This step checks the code for potential security issues using tfsec, an open-source security scanner for Terraform. It helps identify security vulnerabilities or misconfigurations in the infrastructure code.
- tflint: This step employs tflint, a Terraform linting tool, to perform additional static code analysis and linting on the Terraform code. It helps detect potential issues and ensures adherence to best practices and coding standards specific to Terraform.
- Terraform Plan / Apply

To ensure accuracy and control over the changes made to your infrastructure, the job that applies the configuration must be initiated manually. Before proceeding, carefully review the generated plan to verify that the proposed changes align with your intended modifications. By manually reviewing and approving the plan, you can be confident that only the intended modifications will be implemented, mitigating potential risks or unintended consequences.
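The pipeline's shape is roughly the following; this is a hedged sketch with assumed stage and job names, not the repository's actual `.gitlab-ci.yml`:

```yaml
# Hypothetical outline; consult the repository's .gitlab-ci.yml for the real jobs.
stages: [test, plan, apply]

validate:
  stage: test
  script:
    - terraform init -backend=false
    - terraform validate

plan:
  stage: plan
  script:
    - terraform init
    - terraform plan -out=tfplan
  artifacts:
    paths: [tfplan]

apply:
  stage: apply
  script:
    - terraform init
    - terraform apply tfplan
  when: manual   # apply must be started by hand after reviewing the plan
```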
Follow these steps to commit changes and trigger the pipeline:

- Install pre-commit hooks: Make sure you have all tools from the requirements section installed.

- Clone the Git repository (if you haven't already):

  ```bash
  git clone <repository-url>
  ```

- Navigate to the repository directory:

  ```bash
  cd <repository-directory>
  ```

- Create a new branch:

  ```bash
  git checkout -b <new-feature-branch-name>
  ```

- Make changes to the Terraform files as needed.

- Run pre-commit hooks: Before committing, run the pre-commit hooks to check for formatting and validation issues:

  ```bash
  pre-commit run --all-files
  ```

- Fix any issues: If the pre-commit hooks report any issues, fix them and re-run the hooks until they pass.

- Stage and commit the changes:

  ```bash
  git add .
  git commit -m "Your commit message describing the changes"
  ```

- Push the changes to the repository:

  ```bash
  git push origin <branch-name>
  ```

  Replace `<branch-name>` with the name of the branch you are working on (e.g., `new-feature-branch-name`).

- Monitor the pipeline: After pushing the changes, the pipeline will be triggered automatically. You can monitor its progress and check for any issues in the CI/CD interface.

- Merge Request: If the pipeline is successful and the changes are on a feature branch, create a Merge Request to merge the changes into the main branch. If the pipeline fails, investigate the issue, fix it, and push the changes again to re-trigger the pipeline. Once the merge request is created, your team can review the changes, provide feedback, and approve or request changes. After the merge request has been reviewed and approved, it can be merged into the main branch to apply the changes to the production infrastructure.
Hey everyone,

I've been in the IT game for over 20 years, cutting my teeth with some big names like IBM, Thales, and Amazon. These days, I wear the hat of a DevOps Consultant and Team Lead, but what really gets me going is Docker and container technology - I'm kind of obsessed!

I have my own IT blog, where I've built a community of DevOps enthusiasts who share my love for all things Docker, containers, and IT technologies in general. And to make sure everyone can jump on this awesome DevOps train, I write super detailed guides (seriously, they're foolproof!) that help even newbies deploy and manage complex IT solutions.

My dream is to empower every single person in the DevOps community to squeeze every last drop of potential out of Docker and container tech.

As a Docker Captain, I'm stoked to share my knowledge, experiences, and a good dose of passion for the tech. My aim is to encourage learning, innovation, and growth, and to inspire the next generation of IT whizz-kids to push Docker and container tech to its limits.

Let's do this together!

Click into sre.gg: my virtual space is a 2D pixel-art portfolio inviting you to interact with elements that encapsulate the milestones of my DevOps career.

Dive into my comprehensive IT courses designed for enthusiasts and professionals alike. Whether you're looking to master Docker, conquer Kubernetes, or advance your DevOps skills, my courses provide a structured pathway to enhancing your technical prowess.

Each course is built from the ground up with real-world scenarios in mind, ensuring that you gain practical knowledge and hands-on experience. From beginners to seasoned professionals, there's something here for everyone to elevate their IT skills.

Take a look at my service catalog and find out how we can make your technological life better. Whether it's increasing the efficiency of your IT infrastructure, advancing your career, or expanding your technological horizons, I'm here to help you achieve your goals. From DevOps transformations to building gaming computers: let's make your technology unparalleled!

Join my Patreon and dive deep into the world of Docker and DevOps with exclusive content tailored for IT enthusiasts and professionals. As your experienced guide, I offer a range of membership tiers designed to suit everyone from newbies to IT experts.

- Check out my collection of essential DevOps books
- Check out my studio streaming and recording kit
- Check out my streaming starter kit

- YouTube
- X / Twitter
- Instagram
- Mastodon
- Threads
- Facebook
- Bluesky
- TikTok
- LinkedIn
- daily.dev Squad
- LeetCode
- GitHub
- Discord

- PayPal
- Patreon
- GitHub
- BuyMeaCoffee
- Ko-fi

- Bitcoin (BTC): bc1q2fq0k2lvdythdrj4ep20metjwnjuf7wccpckxc
- Ethereum (ETH): 0x76C936F9366Fad39769CA5285b0Af1d975adacB8
- Binance Coin (BNB): bnb1xnn6gg63lr2dgufngfr0lkq39kz8qltjt2v2g6
- Litecoin (LTC): LMGrhx8Jsx73h1pWY9FE8GB46nBytjvz8g