As a PhD researcher in the relevant field, I believe I am well suited to this task. Below is my proposal:
MILESTONE 1:
MODEL CHOICE: DistilBERT or TinyLlama, both of which run efficiently on a local CPU and can be fine-tuned for a variety of NLP tasks.
• CAPABILITIES: The model will generate diverse question formats, including True/False, Multiple Choice, Match the Following, and Fill in the Blanks.
• PROMPT ENGINEERING: Zero-shot, one-shot, and few-shot prompting techniques will be demonstrated.
• OFFICIAL DOCUMENTATION:
DistilBERT GitHub & TinyLlama Documentation
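To illustrate the prompt-engineering item above, here is a minimal sketch of how zero-, one-, and few-shot prompts could be assembled for question generation. The passages and example questions are illustrative placeholders, not real data, and the exact prompt wording would be tuned during the project.

```python
# Minimal sketch: building zero-, one-, and few-shot prompts for
# question generation. All passages/questions below are placeholders.

def build_prompt(passage, examples=()):
    """Zero-shot when examples is empty; one-/few-shot otherwise."""
    parts = ["Generate one multiple-choice question from the passage."]
    for src, question in examples:  # each shot is a (passage, question) pair
        parts.append(f"Passage: {src}\nQuestion: {question}")
    parts.append(f"Passage: {passage}\nQuestion:")
    return "\n\n".join(parts)

zero_shot = build_prompt("Water boils at 100 C at sea level.")
few_shot = build_prompt(
    "Water boils at 100 C at sea level.",
    examples=[
        ("The sun rises in the east.",
         "Where does the sun rise? (a) east (b) west"),
        ("Paris is the capital of France.",
         "What is the capital of France? (a) Paris (b) Lyon"),
    ],
)
```

The same builder covers all three regimes, so the demonstration only varies the number of examples passed in.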
MILESTONE 2:
• DATASET PREPARATION: I will curate a comprehensive dataset suitable for fine-tuning, covering a wide range of topics.
• FINE-TUNING EXECUTION: The model will be fine-tuned, with performance metrics (graphs of training, validation, and test loss) provided.
• EVALUATION COMPARISON: I will compare model outputs before and after fine-tuning to demonstrate measurable improvement, and store the results in a database such as SQLite or MongoDB.
• OFFICIAL DOCUMENTATION: Hugging Face Fine-Tuning Guides
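For the dataset-preparation step, one common approach is to convert curated (context, question, answer) triples into instruction-style JSONL records. The sketch below assumes a "prompt"/"completion" field layout, which is an assumption on my part; the field names would be adapted to whatever format the chosen fine-tuning script expects.

```python
# Sketch: converting curated (context, question, answer) triples into
# JSONL records for instruction-style fine-tuning. The "prompt" /
# "completion" field names are an assumption, not a fixed requirement.
import json

def to_jsonl(records):
    lines = []
    for context, question, answer in records:
        lines.append(json.dumps({
            "prompt": f"Generate a question from: {context}",
            "completion": f"{question} Answer: {answer}",
        }))
    return "\n".join(lines)

sample = [("The heart pumps blood.", "What does the heart pump?", "Blood")]
jsonl = to_jsonl(sample)
```

Each line is an independent JSON object, which most fine-tuning loaders can stream without holding the whole dataset in memory.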
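For the evaluation-comparison step, a small SQLite schema suffices to record before/after metrics. This is one possible layout, sketched with Python's standard-library sqlite3 module; the metric names and numbers below are placeholders, not measured results.

```python
# Sketch: storing before/after fine-tuning evaluation scores in SQLite
# (stdlib sqlite3). Metric values here are placeholders, not results.
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path in practice
conn.execute("""CREATE TABLE eval_results (
    run TEXT, metric TEXT, value REAL)""")
rows = [
    ("baseline",   "test_loss", 2.31),  # placeholder numbers
    ("fine_tuned", "test_loss", 1.12),
]
conn.executemany("INSERT INTO eval_results VALUES (?, ?, ?)", rows)
conn.commit()

# Pull both runs back out to compare them.
before, after = (
    conn.execute("SELECT value FROM eval_results WHERE run=?",
                 (run,)).fetchone()[0]
    for run in ("baseline", "fine_tuned")
)
```

The same table can hold every metric from every run, which makes generating the before/after comparison graphs straightforward.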
MILESTONE 3:
• REPORT PREPARATION: A detailed 25-page technical report will cover model selection rationale, dataset preparation, fine-tuning details, evaluation results, and conclusions.
MILESTONE 4:
I will develop an interactive quiz using Gradio, Streamlit, or Flask to display questions and track user progress.
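The core quiz logic (question bank, answer checking, progress tracking) can be kept independent of the UI framework, so the same class works behind Gradio, Streamlit, or Flask. A minimal sketch, with placeholder questions:

```python
# Sketch of the UI-independent quiz core that a Gradio/Streamlit/Flask
# front end would wrap. The questions below are placeholders.
class Quiz:
    def __init__(self, questions):
        self.questions = questions  # list of (prompt, correct_answer)
        self.index = 0              # next unanswered question
        self.score = 0

    def current(self):
        return self.questions[self.index][0]

    def answer(self, choice):
        correct = choice == self.questions[self.index][1]
        self.score += correct  # bool counts as 0 or 1
        self.index += 1
        return correct

    def progress(self):
        return f"{self.index}/{len(self.questions)} answered, score {self.score}"

quiz = Quiz([("2 + 2 = ?", "4"), ("Capital of France?", "Paris")])
quiz.answer("4")     # correct
quiz.answer("Lyon")  # incorrect
```

Separating the logic this way also makes it easy to unit-test scoring before any UI work begins.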
TIMELINE & COMMUNICATIONS:
I can deliver the above within one week. I am committed to regular progress updates and welcome feedback throughout to ensure your expectations are met. Thanks