Running terraform destroy deletes the EBS volume along with the instance, which we do not want for production. The EBS volume should just be detached, not destroyed/deleted #4293
Comments
I would say this is working as intended - you're declaring the EBS volume resource, so its lifecycle is managed by Terraform. Since what you've requested would lead to Terraform leaving unmanaged resources behind, what you may want to do is create the volume manually, use a data source to import its settings, and work with it from there. Looking at the original PR that introduced skip_destroy on the volume attachment resource, the intended use case is not to preserve the volume itself, but to skip destroying the actual attachment to the instance so that the filesystem can be left in a consistent state on an externally-managed volume when the instance is destroyed. If you declare the EBS volume as a resource in the same lifecycle, it is going to get destroyed regardless of the setting on the aws_volume_attachment.
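To illustrate the data-source approach mentioned above, a minimal sketch in the 0.11 syntax used in this issue (the tag value and the instance reference are hypothetical):

data "aws_ebs_volume" "storage" {
  most_recent = true

  filter {
    name   = "tag:Name"
    values = ["jenkins-data"]    # hypothetical tag on a volume created outside Terraform
  }
}

resource "aws_volume_attachment" "ebs_assoc" {
  device_name  = "/dev/sdh"
  volume_id    = "${data.aws_ebs_volume.storage.id}"
  instance_id  = "${aws_instance.broker.id}"    # hypothetical instance resource
  skip_destroy = true                           # leave the attachment alone on destroy
}

Because the volume is only read through a data source, terraform destroy removes the attachment from state but never deletes the volume itself.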
The workaround is a bit cumbersome in my opinion. In my case I've built a Jenkins server and a separate EBS volume for the data drive using Terraform. If we ever need to destroy the Jenkins server but want to move the data drive to a new machine and attach it there, we simply can't do so via Terraform: the data drive is still marked for destruction when the Jenkins server is destroyed, even with skip_destroy = true set on the aws_volume_attachment and prevent_destroy = true on the aws_ebs_volume.
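For reference, a minimal sketch of the combination described in the comment above (sizes, zones, and the Jenkins instance reference are illustrative):

resource "aws_ebs_volume" "jenkins_data" {
  availability_zone = "us-east-1a"
  size              = 100

  lifecycle {
    prevent_destroy = true    # intended to block destruction of the managed volume
  }
}

resource "aws_volume_attachment" "jenkins_data" {
  device_name  = "/dev/sdf"
  volume_id    = "${aws_ebs_volume.jenkins_data.id}"
  instance_id  = "${aws_instance.jenkins.id}"    # hypothetical Jenkins instance resource
  skip_destroy = true
}

Even then, because the volume itself is a managed resource, a full terraform destroy cannot simply detach it and walk away: prevent_destroy makes the destroy fail rather than leaving the volume behind.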
A solution for this would be good to see. For example, in CloudFormation it's possible to set a condition on some resources to keep them after a stack is deleted. That makes a lot of sense for EBS volumes, together with being able to record the volume ID / mount point somewhere so that the volume can easily be reattached on the next apply.
Could you put the EBS volume and the instance in their own Terraform states and make the volume available to the instance via remote state? I will give this a test and see if it works.
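A rough sketch of that split-state idea, assuming the volume's state lives in S3 (bucket, key, and resource names are hypothetical):

# Configuration A: manages only the volume, in its own state
resource "aws_ebs_volume" "storage" {
  availability_zone = "us-east-1a"
  size              = 100
}

output "volume_id" {
  value = "${aws_ebs_volume.storage.id}"
}

# Configuration B: manages the instance and reads the volume ID from A's state
data "terraform_remote_state" "storage" {
  backend = "s3"

  config {
    bucket = "my-terraform-states"    # hypothetical bucket
    key    = "ebs/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_volume_attachment" "ebs_assoc" {
  device_name  = "/dev/sdf"
  volume_id    = "${data.terraform_remote_state.storage.volume_id}"
  instance_id  = "${aws_instance.broker.id}"    # hypothetical instance resource
  skip_destroy = true
}

Destroying configuration B then removes only the instance and the attachment; the volume survives because it is managed in configuration A.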
@videte47 did your test work?
Although this was problematic in my own scenario, Ansible solved the problem in my workflow: https://medium.com/faun/attaching-a-persistent-ebs-volume-to-a-self-healing-instance-with-ansible-d0140431a22a
I concur, data IS more important than the Terraform state: you can recreate the resource, but not lost data. Just as AWS, when it lets you destroy an EC2 instance, warns which volumes will not be destroyed, that same behavior should be respected in Terraform.
It all depends on the use case. If you are deploying configuration as code and the deployed instance is immutable, then you probably do want to delete the EBS volume more often than not. If it is being manually configured, or it is a remote workstation / Cloud9 instance, then preserving the EBS volume is more often desirable.
ebs_block_device is capable of data persistence
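For context on that suggestion, a minimal sketch of an inline ebs_block_device that outlives instance termination (the AMI ID and device name are illustrative):

resource "aws_instance" "broker" {
  ami           = "ami-12345678"    # illustrative AMI ID
  instance_type = "t2.medium"

  ebs_block_device {
    device_name           = "/dev/sdf"
    volume_size           = 100
    delete_on_termination = false    # AWS keeps the volume when the instance is terminated
  }
}

Note that with delete_on_termination = false the volume remains in AWS after terraform destroy, but it is no longer tracked in state, so it would need to be re-imported or attached by other means afterwards.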
Marking this issue as stale due to inactivity. This helps our maintainers find and focus on the active issues. If this issue receives no comments in the next 30 days it will automatically be closed. Maintainers can also remove the stale label. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!
This issue was originally opened by @ruchikhanuja as hashicorp/terraform#17889. It was migrated here as a result of the provider split. The original body of the issue is below.
Terraform Version
Terraform v0.11.7
Terraform Configuration Files
resource "aws_ebs_volume" "storage" {
availability_zone = "${data.aws_subnet.this.availability_zone}"
type = "${var.ebs_storage_type}"
size = "${var.ebs_storage_size}"
}
resource "aws_volume_attachment" "ebs_assoc" {
depends_on = ["aws_ebs_volume.storage"]
device_name = "/xyz"
volume_id = "${aws_ebs_volume.storage.*.id[count.index]}"
instance_id = "${module.brokers.instance_ids[count.index]}"
skip_destroy = true
}
Debug Output
Crash Output
Expected Behavior
When running terraform destroy, only the instance should be destroyed, not the EBS volume.
Actual Behavior
With terraform destroy, the EBS volume is also deleted, even though skip_destroy = true is set.
Steps to Reproduce
Additional Context
References