
Commit

Improve question generation efficiency in Response Relevancy metrics (#1810)

**Description**: This update refactors the _ascore method to use
asyncio.gather for parallel execution of question generation tasks. Key
improvements include:

**Issue**: ResponseRelevancy evaluation runs question generation in a
sequential loop based on the strictness variable (default 3), which takes
longer than expected.

**Performance Enhancement**: By processing all tasks concurrently, the
overall execution time is significantly reduced, especially for high
strictness values.
**Code Simplification**: The refactored code is more concise and aligns
better with Python's asynchronous programming patterns.
**Scalability**: The new implementation handles a larger number of tasks
without blocking, making it more suitable for use cases requiring high
concurrency.
**Backward Compatibility**: All existing functionality remains intact,
with no changes required to external interfaces.

This change improves the efficiency and maintainability of the codebase,
aligning with best practices for asynchronous programming in Python.
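As a quick illustration of why the concurrent pattern helps (a standalone sketch, not the ragas code: the `fake_generation` coroutine, its one-second delay, and the strictness value of 3 are illustrative assumptions), awaiting the calls one by one costs roughly strictness × latency, while asyncio.gather overlaps them so the total stays close to a single call's latency:

```python
import asyncio
import time


async def fake_generation(i: int) -> str:
    # Stand-in for a single LLM question-generation call (~1s of latency).
    await asyncio.sleep(1)
    return f"question-{i}"


async def sequential(strictness: int) -> list[str]:
    # Old pattern: await each call one after another (~strictness seconds total).
    results = []
    for i in range(strictness):
        results.append(await fake_generation(i))
    return results


async def concurrent(strictness: int) -> list[str]:
    # New pattern: schedule all calls, then await them together (~1 second total).
    tasks = [fake_generation(i) for i in range(strictness)]
    return await asyncio.gather(*tasks)


async def main() -> None:
    for fn in (sequential, concurrent):
        start = time.perf_counter()
        await fn(3)
        print(f"{fn.__name__}: {time.perf_counter() - start:.2f}s")


if __name__ == "__main__":
    asyncio.run(main())
```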

Result after resolution: the average time for one run of the example code
from the tutorial (averaged over 10 runs) dropped from 4.35s to 2.82s.
hundredeuk2 authored Jan 7, 2025
1 parent c455c38 commit 4e732cf
Showing 1 changed file with 6 additions and 4 deletions.
src/ragas/metrics/_answer_relevance.py

```diff
@@ -16,6 +16,7 @@
     SingleTurnMetric,
 )
 from ragas.prompt import PydanticPrompt
+import asyncio
 
 logger = logging.getLogger(__name__)
@@ -136,14 +137,15 @@ async def _ascore(self, row: t.Dict, callbacks: Callbacks) -> float:
         assert self.llm is not None, "LLM is not set"
 
         prompt_input = ResponseRelevanceInput(response=row["response"])
-        responses = []
-        for _ in range(self.strictness):
-            response = await self.question_generation.generate(
+        tasks = [
+            self.question_generation.generate(
                 data=prompt_input,
                 llm=self.llm,
                 callbacks=callbacks,
             )
-            responses.append(response)
+            for _ in range(self.strictness)
+        ]
+        responses = await asyncio.gather(*tasks)
 
         return self._calculate_score(responses, row)
```
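One property of asyncio.gather worth noting alongside the backward-compatibility claim (a general asyncio guarantee, not something specific to this patch): results are returned in the order the awaitables were passed, so `responses` keeps the same ordering as the old sequential loop and `_calculate_score` receives equivalent input. A minimal, self-contained illustration:

```python
import asyncio


async def delayed(value: int, delay: float) -> int:
    # Finish at different times to show that gather still preserves input order.
    await asyncio.sleep(delay)
    return value


async def main() -> None:
    # The second task finishes first, but the results keep the input order.
    results = await asyncio.gather(delayed(1, 0.3), delayed(2, 0.1), delayed(3, 0.2))
    print(results)  # [1, 2, 3]


asyncio.run(main())
```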
