[WEB-2600] fix: estimate point deletion #5762
Conversation
Walkthrough
The pull request introduces several enhancements across multiple files, focusing on the functionality of estimate points, project updates, and activity tracking. Key changes include a refined …

Changes
Actionable comments posted: 6
🧹 Outside diff range and nitpick comments (6)
apiserver/plane/app/views/project/base.py (2)
Line range hint 441-458: New inbox and triage state creation

The method now creates a default inbox and triage state if they don't exist. This is a good addition for consistency in project management.
However, consider adding error handling for the inbox and state creation processes.
Consider wrapping the inbox and state creation in a try-except block:
```diff
 if serializer.data["inbox_view"]:
+    try:
         inbox = Inbox.objects.filter(
             project=project,
             is_default=True,
         ).first()
         if not inbox:
             Inbox.objects.create(
                 name=f"{project.name} Inbox",
                 project=project,
                 is_default=True,
             )
         # Create the triage state in Backlog group
         State.objects.get_or_create(
             name="Triage",
             group="triage",
             description="Default state for managing all Inbox Issues",
             project_id=pk,
             color="#ff7700",
             is_triage=True,
         )
+    except Exception as e:
+        return Response(
+            {"error": f"Failed to create inbox or triage state: {str(e)}"},
+            status=status.HTTP_500_INTERNAL_SERVER_ERROR,
+        )
```
Line range hint 460-472: Add logging for new functionality

Consider adding logging statements for the creation of the inbox and triage state. This will help with debugging and monitoring the system's behavior.
Add logging statements after the inbox and state creation:
```diff
 if serializer.data["inbox_view"]:
     inbox = Inbox.objects.filter(
         project=project,
         is_default=True,
     ).first()
     if not inbox:
         Inbox.objects.create(
             name=f"{project.name} Inbox",
             project=project,
             is_default=True,
         )
+        logger.info(f"Created default inbox for project {project.id}")
     # Create the triage state in Backlog group
     State.objects.get_or_create(
         name="Triage",
         group="triage",
         description="Default state for managing all Inbox Issues",
         project_id=pk,
         color="#ff7700",
         is_triage=True,
     )
+    logger.info(f"Created or got triage state for project {project.id}")
```

apiserver/plane/app/views/estimate/base.py (4)
263-287: Optimize Task Queue Processing

Currently, `issue_activity.delay()` is called within a loop for each issue. If there are many issues, this will enqueue a large number of tasks, which could overwhelm the task processing system. [performance]
Consider batching the activity logging:
- Option 1: Modify `issue_activity` to accept a list of issues and process them in a single task.
- Option 2: Use group tasks or chunking provided by your task queue (e.g., Celery) to manage the workload efficiently, as sketched below.
This change will improve the scalability and performance of your application.
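As an illustration of Option 2, here is a minimal sketch using Celery's `group` primitive. The batch size and overall structure are assumptions for illustration, not part of this PR; the keyword arguments mirror the existing `issue_activity.delay()` calls:

```python
from celery import group

BATCH_SIZE = 100  # assumed batch size, tune for your broker

# Build task signatures instead of calling .delay() once per issue,
# then enqueue each batch as a single group.
signatures = [
    issue_activity.s(
        type="issue.activity.updated",
        requested_data=json.dumps({"estimate_point": new_estimate_point}),
        actor_id=str(request.user.id),
        issue_id=issue.id,
        project_id=str(project_id),
        current_instance=json.dumps(
            {
                "estimate_point": (
                    str(issue.estimate_point_id)
                    if issue.estimate_point_id
                    else None
                )
            }
        ),
        epoch=int(timezone.now().timestamp()),
    )
    for issue in issues
]
for start in range(0, len(signatures), BATCH_SIZE):
    group(signatures[start:start + BATCH_SIZE]).apply_async()
```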
267-285: Simplify `requested_data` and `current_instance` Assignments

The use of complex ternary expressions within the JSON dumps can be simplified for readability.

For example, in `requested_data`:

```python
"estimate_point": new_estimate_point
```

Since `new_estimate_point` is already set appropriately before the loop, you can use it directly. Similarly, in `current_instance`:

```python
"estimate_point": str(issue.estimate_point_id) if issue.estimate_point_id else None
```

This expression is acceptable, but consider assigning it to a variable for clarity:

```python
current_estimate_point = (
    str(issue.estimate_point_id)
    if issue.estimate_point_id
    else None
)
```

And then:

```python
"estimate_point": current_estimate_point
```

This makes the code cleaner and easier to read.
Also applies to: 298-316
306-316: Time Zone Awareness

When using `timezone.now().timestamp()`, ensure that the timestamp is time zone aware and consistent across your application. If your application relies on UTC, you might want to explicitly use `timezone.utc` or `datetime.utcnow()`:

```python
epoch = int(datetime.datetime.utcnow().timestamp())
```

Ensure consistency with your application's time zone settings.
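As an illustrative check (assuming Django's `USE_TZ = True`, the project default), an aware datetime yields the same epoch value regardless of server locale:

```python
import datetime
from django.utils import timezone

now = timezone.now()            # timezone-aware UTC datetime when USE_TZ=True
assert now.tzinfo is not None   # aware datetimes give unambiguous timestamps
epoch = int(now.timestamp())

# Equivalent explicit-UTC form:
epoch_utc = int(datetime.datetime.now(datetime.timezone.utc).timestamp())
```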
Line range hint 232-238: Validate Unique Keys for Estimate Points

In the `create` and `partial_update` methods for `EstimatePointEndpoint`, there is a TODO comment about adding key validation for duplicates. Implement key uniqueness validation to prevent multiple estimate points with the same `key` in an estimate.

You can add a validation check:

```python
if EstimatePoint.objects.filter(
    estimate_id=estimate_id,
    key=request.data.get("key"),
).exists():
    return Response(
        {"error": "An estimate point with this key already exists."},
        status=status.HTTP_400_BAD_REQUEST,
    )
```

Would you like assistance in implementing this validation?
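As a complementary safeguard, the same rule could also be enforced at the database level. The model below is a simplified, hypothetical stand-in for the real `EstimatePoint` model, shown only to illustrate the idea:

```python
from django.db import models


class EstimatePoint(models.Model):
    # Simplified stand-in; field names here are assumptions for illustration.
    estimate = models.ForeignKey("Estimate", on_delete=models.CASCADE)
    key = models.IntegerField()

    class Meta:
        constraints = [
            # Reject duplicate keys within the same estimate at the DB level.
            models.UniqueConstraint(
                fields=["estimate", "key"],
                name="unique_estimate_point_key_per_estimate",
            ),
        ]
```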
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (4)
- apiserver/plane/app/views/estimate/base.py (3 hunks)
- apiserver/plane/app/views/project/base.py (1 hunks)
- apiserver/plane/bgtasks/deletion_task.py (3 hunks)
- apiserver/plane/bgtasks/issue_activities_task.py (2 hunks)
🧰 Additional context used
🔇 Additional comments (8)
apiserver/plane/bgtasks/deletion_task.py (4)
5-5: Necessary Import of `models`

The import of `models` from `django.db` is essential for accessing `models.CASCADE` in the updated soft deletion logic below.
22-26: Correct Implementation of `on_delete` Check

Adding the check for `on_delete=models.CASCADE` ensures that only related objects intended to be deleted via cascading are soft-deleted. This enhances the accuracy of the soft deletion process and prevents unintended deletions.
27-35: Proper Handling of Related Objects

The code correctly handles both `one_to_many` and `one_to_one` relationships when retrieving related objects for soft deletion. This comprehensive approach ensures that all relevant related objects are considered.
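For reference, here is a minimal sketch of what such a soft-deletion pass over related objects can look like; the names `instance` and `db_alias` are assumptions, and this is not the exact code in `deletion_task.py`:

```python
from django.db import models
from django.utils import timezone


def soft_delete_related(instance, db_alias="default"):
    """Soft-delete objects related to `instance` via CASCADE relations."""
    for relation in instance._meta.related_objects:
        # Only follow relations that would cascade on a hard delete.
        if getattr(relation, "on_delete", None) is not models.CASCADE:
            continue
        accessor = relation.get_accessor_name()
        if relation.one_to_many:
            related_objects = getattr(instance, accessor).all()
        elif relation.one_to_one:
            related = getattr(instance, accessor, None)
            related_objects = [related] if related is not None else []
        else:
            related_objects = []
        for obj in related_objects:
            if hasattr(obj, "deleted_at"):
                obj.deleted_at = timezone.now()
                obj.save(using=db_alias)
```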
37-40: Soft Deleting Related Objects

The loop appropriately sets the `deleted_at` timestamp for each related object and saves it using the specified database alias. This effectively performs a soft delete on the related objects.

apiserver/plane/app/views/estimate/base.py (3)
263-287: Handle Potential Race Conditions

Updating issues and logging activities separately could lead to race conditions if other processes are modifying the same issues concurrently. [concurrency]

Consider wrapping the issue updates and activity logging within a transaction to ensure atomicity:

```python
from django.db import transaction

with transaction.atomic():
    # Update issues
    issues.update(estimate_point_id=new_estimate_point)
    # Log activities
    for issue in issues:
        issue_activity.delay(...)
```

This approach ensures data consistency and integrity.
Also applies to: 296-316
3-6: Unused Imports

Verify whether the imported modules `json` and `timezone` are necessary.

- `json`: Used in `json.dumps()` for `requested_data` and `current_instance`.
- `timezone`: Used for `timezone.now().timestamp()`.

Since both are used in the new code, the imports are appropriate.
Line range hint 340-352: Ensure Deletion of Old Estimate Point Does Not Affect Key Ordering

After deleting the `old_estimate_point`, you adjust the keys of remaining estimate points. This assumes that the `key` field is used for ordering and has to be sequential. Consider whether adjusting the `key` values is necessary or if it could lead to issues elsewhere:

- If `key` is meant to be a unique identifier, changing it might cause confusion.
- If ordering is important, consider using an explicit ordering field or relying on the database ordering mechanisms.
Verify the impact of this change on other parts of the application.
apiserver/plane/bgtasks/issue_activities_task.py (1)
468-468: Correctly set `verb` based on estimate point changes

The change appropriately sets the `verb` to `"removed"` when `new_estimate` is `None`, accurately reflecting the removal of an estimate point. This ensures that activity logging remains precise.
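In other words, the logic amounts to something like the following sketch (the fallback verb is an assumption for illustration):

```python
# "removed" when the estimate point is cleared, otherwise assume "updated".
verb = "removed" if new_estimate is None else "updated"
```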
```diff
@@ -413,7 +413,7 @@ def create(self, request, slug):
                 status=status.HTTP_410_GONE,
             )

-    @allow_permission([ROLE.ADMIN, ROLE.MEMBER], level="WORKSPACE")
+    @allow_permission([ROLE.ADMIN])
```
Restricted access to ADMIN role only
The `partial_update` method now only allows ADMIN role access. This change enhances security but may impact existing workflows where MEMBER roles previously had update permissions.
Please ensure this change aligns with your project's access control requirements and update any related documentation or user guides accordingly.
```diff
@@ -154,8 +163,7 @@ def hard_delete():
         if hasattr(model, "deleted_at"):
             # Get all instances where 'deleted_at' is greater than 30 days ago
             _ = model.all_objects.filter(
-                deleted_at__lt=timezone.now()
-                - timezone.timedelta(days=days)
+                deleted_at__lt=timezone.now() - timezone.timedelta(days=days)
```
🛠️ Refactor suggestion
Consider Consolidating Hard Deletion Logic
In the `hard_delete` function, you individually delete instances of specific models and then iterate over all models with a `deleted_at` field to delete any remaining instances. This could lead to redundant deletion operations and impact performance.
Suggestion:
Evaluate whether the individual deletions of specific models are necessary if the final loop sufficiently handles the hard deletion of all models with a `deleted_at` field. Consolidating the deletion logic might improve performance and simplify maintenance.
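For illustration, a minimal sketch of a consolidated pass, assuming the `all_objects` manager and the 30-day retention window shown in the diff above (not the actual implementation):

```python
from django.apps import apps
from django.utils import timezone


def hard_delete(days=30):
    """Permanently delete rows soft-deleted more than `days` days ago."""
    cutoff = timezone.now() - timezone.timedelta(days=days)
    for model in apps.get_models():
        # A single generic loop replaces the per-model deletions.
        if hasattr(model, "deleted_at") and hasattr(model, "all_objects"):
            model.all_objects.filter(deleted_at__lt=cutoff).delete()
```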
```python
issues = Issue.objects.filter(
    project_id=project_id,
    workspace__slug=slug,
    estimate_point_id=estimate_point_id,
)
```
Ensure Issues are Updated to Remove Deleted Estimate Point
In the `else` block where `new_estimate_id` is not provided, the `issues` are not updated to remove the reference to the deleted `estimate_point_id`. This means the issues will still reference an estimate point that no longer exists, which could lead to integrity issues.

To fix this, add an `issues.update(estimate_point_id=None)` after the loop in the `else` block to set the `estimate_point_id` to `None` for all affected issues:

```python
issues.update(estimate_point_id=None)
```
This ensures that all issues previously associated with the deleted estimate point no longer hold invalid references.
```diff
         issues = Issue.objects.filter(
             project_id=project_id,
             workspace__slug=slug,
             estimate_point_id=estimate_point_id,
         )
         for issue in issues:
             issue_activity.delay(
                 type="issue.activity.updated",
                 requested_data=json.dumps(
                     {
                         "estimate_point": (
                             str(new_estimate_id)
                             if new_estimate_id
                             else None
                         ),
                     }
                 ),
                 actor_id=str(request.user.id),
                 issue_id=issue.id,
                 project_id=str(project_id),
                 current_instance=json.dumps(
                     {
                         "estimate_point": (
                             str(issue.estimate_point_id)
                             if issue.estimate_point_id
                             else None
                         ),
                     }
                 ),
                 epoch=int(timezone.now().timestamp()),
             )
         issues.update(estimate_point_id=new_estimate_id)
     else:
         issues = Issue.objects.filter(
             project_id=project_id,
             workspace__slug=slug,
             estimate_point_id=estimate_point_id,
-        ).update(estimate_point_id=new_estimate_id)
+        )
         for issue in issues:
             issue_activity.delay(
                 type="issue.activity.updated",
                 requested_data=json.dumps(
                     {
                         "estimate_point": None,
                     }
                 ),
                 actor_id=str(request.user.id),
                 issue_id=issue.id,
                 project_id=str(project_id),
                 current_instance=json.dumps(
                     {
                         "estimate_point": (
                             str(issue.estimate_point_id)
                             if issue.estimate_point_id
                             else None
                         ),
                     }
                 ),
                 epoch=int(timezone.now().timestamp()),
             )
```
🛠️ Refactor suggestion
Refactor Repetitive Code in `destroy` Method

The code inside the `if new_estimate_id` and `else` blocks shares significant similarities, particularly in how `issues` are retrieved and how `issue_activity.delay` is called. This duplication can make the code harder to maintain and more prone to errors.
You can refactor the code to eliminate duplication:
```python
issues = Issue.objects.filter(
    project_id=project_id,
    workspace__slug=slug,
    estimate_point_id=estimate_point_id,
)
new_estimate_point = str(new_estimate_id) if new_estimate_id else None
for issue in issues:
    issue_activity.delay(
        type="issue.activity.updated",
        requested_data=json.dumps({"estimate_point": new_estimate_point}),
        actor_id=str(request.user.id),
        issue_id=issue.id,
        project_id=str(project_id),
        current_instance=json.dumps(
            {
                "estimate_point": (
                    str(issue.estimate_point_id)
                    if issue.estimate_point_id
                    else None
                )
            }
        ),
        epoch=int(timezone.now().timestamp()),
    )
if new_estimate_id:
    issues.update(estimate_point_id=new_estimate_id)
else:
    issues.update(estimate_point_id=None)
```
This refactoring reduces code duplication and makes the logic clearer.
```python
                ),
                epoch=int(timezone.now().timestamp()),
            )
        issues.update(estimate_point_id=new_estimate_id)
```
Consistent Issue Updates
Ensure that the `issues.update(...)` method is called in both the `if` and `else` blocks and placed outside the loop. In the current code, the update in the `else` block is missing.
Add the following after line 316:
```python
issues.update(estimate_point_id=None)
```
This ensures that all issues have their `estimate_point_id` set to `None` when no `new_estimate_id` is provided.
Also applies to: 317-317
else "inbox_issue" if inbox else "issue" | ||
), | ||
event_id=( | ||
activity.issue_comment_id | ||
if activity.field == "comment" | ||
else inbox | ||
if inbox | ||
else activity.issue_id | ||
else inbox if inbox else activity.issue_id |
🛠️ Refactor suggestion
Refactor nested ternary operators for improved readability
The nested ternary operators in the assignments of `event` and `event_id` reduce code readability and can make maintenance challenging. Consider refactoring the code to enhance clarity.
Apply this diff to improve readability:
```diff
 if len(issue_activities_created):
     for activity in issue_activities_created:
+        if activity.field == "comment":
+            event = "issue_comment"
+            event_id = activity.issue_comment_id
+        elif inbox:
+            event = "inbox_issue"
+            event_id = inbox
+        else:
+            event = "issue"
+            event_id = activity.issue_id
         webhook_activity.delay(
-            event=(
-                "issue_comment"
-                if activity.field == "comment"
-                else "inbox_issue" if inbox else "issue"
-            ),
-            event_id=(
-                activity.issue_comment_id
-                if activity.field == "comment"
-                else inbox if inbox else activity.issue_id
-            ),
+            event=event,
+            event_id=event_id,
             verb=activity.verb,
             field=(
                 "description"
                 if activity.field == "comment"
                 else activity.field
             ),
             old_value=(
                 activity.old_value
                 if activity.old_value != ""
                 else None
             ),
             new_value=(
                 activity.new_value
                 if activity.new_value != ""
                 else None
             ),
             actor_id=activity.actor_id,
             current_site=origin,
             slug=activity.workspace.slug,
             old_identifier=activity.old_identifier,
             new_identifier=activity.new_identifier,
         )
```
This refactoring assigns `event` and `event_id` before the `webhook_activity.delay` call, enhancing the readability and maintainability of the code.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
else "inbox_issue" if inbox else "issue" | |
), | |
event_id=( | |
activity.issue_comment_id | |
if activity.field == "comment" | |
else inbox | |
if inbox | |
else activity.issue_id | |
else inbox if inbox else activity.issue_id | |
if activity.field == "comment": | |
event = "issue_comment" | |
event_id = activity.issue_comment_id | |
elif inbox: | |
event = "inbox_issue" | |
event_id = inbox | |
else: | |
event = "issue" | |
event_id = activity.issue_id | |
webhook_activity.delay( | |
event=event, | |
event_id=event_id, | |
verb=activity.verb, | |
field=( | |
"description" | |
if activity.field == "comment" | |
else activity.field | |
), | |
old_value=( | |
activity.old_value | |
if activity.old_value != "" | |
else None | |
), | |
new_value=( | |
activity.new_value | |
if activity.new_value != "" | |
else None | |
), | |
actor_id=activity.actor_id, | |
current_site=origin, | |
slug=activity.workspace.slug, | |
old_identifier=activity.old_identifier, | |
new_identifier=activity.new_identifier, | |
) |
fix:
Issue Link: WEB-2600
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Chores