Release testing quickstart guide
Testing features is one of the easiest ways to get familiar with the website and its various features. It also helps you understand which parts of the code link to which components, and thus can improve your understanding of the codebase.
At Oppia, we do monthly releases, and during each release we manually test our core user journeys. Anyone is welcome to participate in release testing!
Here is a quick-start guide to help you get involved with the testing:
- Contact Nithesh (at nithesh2108@gmail.com). Hangouts or email is fine. Just mention that you would be interested in participating in the testing.
- Typically, during the first week of the month, you'll get an email from that month's release testing coordinator giving the dates for the current release's testing phase.
- If it is your first time testing, you will be paired with a tester who has done it multiple times before. If you run into any issues while testing a feature, you can get in touch with them or with the testing coordinator.
- This doc outlines the Critical User Journeys. You will be assigned a few of these to test during the release testing phase.
- You can go through these and try some of them yourself.
- Go through the rest of this page to understand how we do testing at Oppia!
Typically, we have three safeguards:

- When a contributor makes a PR, we expect that they have thoroughly tested the changes in the PR for functional correctness, and the reviewer is expected to validate this (either through screenshots or by manually testing). This testing is important, and it mostly keeps per-change bugs out of the release process; the vast majority of bugs caught during release testing can instead be traced back to how one particular change affected another change elsewhere in the codebase.
- To mitigate this, we have automated tests (backend, frontend, and e2e tests). The automated tests are necessary and good, but they are currently not complete. The ultimate goal is to have the complete codebase covered by automated tests, so that release testing requires only minimal effort.
- Since we can't rely completely on automated tests yet, we do a round of manual testing of what we call "critical user journeys". These are core pathways that must never be broken under any circumstances. We have compiled a list of critical user journeys, with steps listed for each. As a tester, you would be given an objective like "Go to the library, search for an exploration by its name, and play it", and you are expected to attempt the action and report any issues you face while performing it.
These steps serve as a general skeleton for the test to be performed. What we look for in testers is a good "testing mentality".
To develop a good testing mentality, approach each feature with the aim of breaking it. Look for loopholes, craft clever inputs, and devise corner cases that might break the functionality. For example, in the view where you can suggest changes to an exploration state, you can try creating a suggestion with no changes, removing a string and adding it back so that the content is unchanged, or submitting with and without a suggestion message. In each of these cases you would expect certain behaviour, and if the actual behaviour differs, that is a problem!

A large portion of the bugs we fix are visual bugs: missing padding, misaligned elements, broken animations, and so on. Fixing these is important for making pages look visually pleasing to the user. We once had a bug where the exploration player's transition from one state to another was broken (#4902); it wasn't noticed during testing and was later caught and fixed outside of the testing team. Developing the skill to find such anomalies is key to becoming a good tester.

Another group of issues occurs only on specific browsers or environments: some happen only on mobile, others only on Safari, and so on. So, as part of the procedure, we expect testers to test across multiple browsers. Learner-facing features also need to look and feel good on mobile and tablet; the learner-facing views include the library, the learner dashboard, and the players. As an example of a browser-specific issue, automatic text-to-speech translations once didn't work on Safari, and after one particular Chrome update they broke on mobile Chrome alone (while working perfectly fine on desktop). Debugging these is tricky and very interesting.
Here is a list of general tips to keep in mind while testing:
- If anything doesn't look right, it is probably a bug. Try to describe clearly to a third person what doesn't look as expected, and confirm with them that it is indeed unexpected.
- File issues on GitHub for any problems you see. This lets you tell a wider group of people that you think something is wrong, and get their feedback on the issue. Even if you are unsure whether what you've encountered is a valid bug, please open an issue on GitHub anyway; it's better to find bugs as early as possible.
- We generally recommend testing across various browsers and screen sizes, and we strongly recommend testing on a mobile or tablet if you have access to one. This helps us catch and reduce bugs across a variety of operating systems and browser configurations. (Note: we don't support IE, but we do support Edge.) Also note that learner-facing pages on Oppia are currently expected to work on mobiles and tablets, while creator views are designed for desktop only.
- Always keep the console open. Any error logged in the console is an issue. It helps if you know the sequence of actions that led to the console error, as this speeds up debugging (the first sketch after this list shows one way to capture console errors automatically). For mobile device testing, see our wiki page on mobile development for how to access console logs.
- Take screenshots or screen recordings of a bug when you see it. A screenshot makes it easier for others to judge whether what you are seeing is indeed a bug.
- We give a general skeleton of what should be tested; treat it as the bare minimum. We also encourage you to digress from this "happy path" and see what outcomes you reach. Thinking outside the box can help catch many non-trivial bugs.
- Always remember that testing should be done from a user's perspective, not a developer's. As a developer who knows the codebase well, certain UIs might seem obvious to you, but a new user who lands on that page might need some hand-holding. File such problems as issues too!
- We also recommend trying learner-facing views on a low-speed internet connection (this can be simulated using Chrome DevTools, or scripted as in the second sketch after this list). A large fraction of our students come from places with slower internet connections!
- If possible, try to guess or find out what's causing the issue, do a little debugging to see how it might be fixed, and include whatever you find when opening the issue, so that the person working on it has something to start with. And if you find a working solution while debugging, it'd be great if you took up the issue yourself.
- When you see long lists or views with multiple items (like the library page), try testing with many entities. Things that work well for a few entities might break for a large number of them (page slowdowns, misaligned objects, broken animations, overflowing text, etc.).
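To complement the console tip above, here is a minimal sketch of how you might capture console errors automatically while stepping through a journey. It is purely illustrative and not part of Oppia's official tooling: it assumes Puppeteer is installed, and that a local Oppia dev server is running at localhost:8181 serving the library page at /community-library.

```ts
import puppeteer from 'puppeteer';

// Assumed target: the library page on a local Oppia dev server.
const PAGE_URL = 'http://localhost:8181/community-library';

async function main() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Print every message the page logs to the console at "error" level.
  page.on('console', (msg) => {
    if (msg.type() === 'error') {
      console.log(`Console error: ${msg.text()}`);
    }
  });

  // Uncaught exceptions thrown by page scripts arrive as 'pageerror' events.
  page.on('pageerror', (err) => {
    console.log(`Uncaught page error: ${err.message}`);
  });

  await page.goto(PAGE_URL, {waitUntil: 'networkidle0'});
  // ... perform the steps of the user journey being tested here ...

  await browser.close();
}

main().catch(console.error);
```

Noting the exact sequence of actions before each logged error, as the tip above suggests, is what makes these logs actionable for whoever debugs the issue.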
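For the low-speed-internet tip, you can throttle the connection manually from the Network tab of Chrome DevTools, or script it. The sketch below shows one possible approach using Puppeteer's predefined network conditions; the "Slow 3G" preset and the mobile-sized viewport are illustrative choices, not an Oppia standard.

```ts
import puppeteer, {PredefinedNetworkConditions} from 'puppeteer';

async function main() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Use a small viewport to approximate a mobile screen.
  await page.setViewport({width: 360, height: 640});

  // Throttle the connection to a "Slow 3G"-like profile.
  await page.emulateNetworkConditions(PredefinedNetworkConditions['Slow 3G']);

  // Assumed URL for a local Oppia dev server's library page.
  await page.goto('http://localhost:8181/community-library');
  // ... step through the learner journey, watching for slow loads,
  // missing loading indicators, or content that never appears ...

  await browser.close();
}

main().catch(console.error);
```

Pay particular attention to requests that silently time out; on a fast developer machine these failure modes are easy to miss.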