
Multi thread Block Device Tests Fix - Ensure unique block address #9650

Merged

Conversation

offirko
Contributor

@offirko offirko commented Feb 10, 2019

Description

Fix multi-threaded block device tests to ensure unique block address per test

Pull request type

[X] Fix
[ ] Refactor
[ ] Target update
[ ] Functionality change
[ ] Docs update
[ ] Test update
[ ] Breaking change

@offirko offirko changed the title General BD Test Fix - Ensure unique block address Multi thread Block Device Tests Fix - Ensure unique block address Feb 10, 2019
@offirko
Contributor Author

offirko commented Feb 10, 2019

@ARMmbed/mbed-os-maintainers , @ARMmbed/mbed-os-storage - please review.

@ciarmcom
Member

@offirko, thank you for your changes.
@ARMmbed/mbed-os-storage @ARMmbed/mbed-os-maintainers please review.

@ciarmcom ciarmcom requested review from a team February 10, 2019 10:00

// Make sure the block address for each test is unique
static unsigned block_seed = 1;
srand(block_seed++);
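For context, a minimal standalone sketch of the pattern in this diff (the function and parameter names pick_block_address, bd_size, and block_size are illustrative assumptions, not identifiers from the test): each test invocation advances the static seed, so rand() yields a different but fully deterministic address sequence per test.

```cpp
#include <cstdlib>

// Illustrative sketch, not the PR's test code: advancing a static seed per
// test invocation makes each test draw from a different, repeatable sequence.
static unsigned block_seed = 1;

unsigned long pick_block_address(unsigned long bd_size, unsigned long block_size)
{
    srand(block_seed++);                       // unique seed per test invocation
    unsigned long num_blocks = bd_size / block_size;
    return (rand() % num_blocks) * block_size; // repeatable for a given seed
}
```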
Contributor

I guess this (block seed) is sufficient for the test's purpose, but would it be better to use time, e.g. read from the us ticker?
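For reference, the time-based alternative mentioned here would look roughly like the sketch below (an assumption of how it could be wired up, not something the PR adopts; us_ticker_read() is the mbed HAL microsecond counter). It would trade reproducible test runs for genuinely different addresses on every run.

```cpp
#include <cstdlib>
#include "us_ticker_api.h"

// Rough sketch: seed rand() from the microsecond ticker instead of a fixed,
// incrementing seed, so every run gets a different address sequence.
void seed_from_ticker()
{
    srand(us_ticker_read());
}
```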

Contributor Author

@offirko offirko Feb 11, 2019

I didn't want to make a drastic change, and I think this should be sufficient for this test's purpose.
We simply need ~50 different addresses for this test (5 threads x 10 iterations); they don't have to be random for each test. Addresses may even repeat, as long as they are not used concurrently by different threads.

Because seed/rand are not really random, I believe that once the test runs successfully it will remain stable. Using time would make the addresses actually random, and then we might have a collision issue (with very, very low probability, as David mentioned below).
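To illustrate the determinism this reply relies on, a tiny self-contained example in standard C++ (nothing from the PR itself): re-seeding rand() with the same value replays the same sequence, which is why a fixed seed schedule gives the same addresses, and the same collision-free behaviour, on every run.

```cpp
#include <cassert>
#include <cstdlib>

int main()
{
    srand(42);
    int first = rand();
    srand(42);               // re-seeding with the same value...
    assert(rand() == first); // ...replays exactly the same sequence
    return 0;
}
```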

Contributor

Can you add this explanation to the commit message? It will help.

@davidsaada
Contributor

@offirko As far as I understand, this solution greatly decreases the probability of the problem, but doesn't eliminate it. The problem, as I see it, is that multiple threads may try to access the same block during the test. Theoretically, this can happen even with different seeds, since the block address is a modulo of the block device size. Maybe it would be better to make sure the threads get different random block addresses before they start their work (perhaps via a parameter).
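A hedged sketch of what this suggestion could look like (illustrative only; thread_ctx, test_thread, run_unique_address_test, and the thread count are assumptions, not code from the PR): each thread is handed a distinct block address before it starts, so collisions are excluded by construction rather than made improbable.

```cpp
#include "mbed.h"
#include "BlockDevice.h"

#define NUM_THREADS 5 // illustrative, matching the test's 5 threads

struct thread_ctx {
    bd_addr_t addr; // block address reserved for this thread alone
};

static void test_thread(thread_ctx *ctx)
{
    // ... program/read/verify using ctx->addr only ...
}

void run_unique_address_test(BlockDevice *bd)
{
    bd_size_t block_size = bd->get_erase_size();
    thread_ctx ctx[NUM_THREADS];
    Thread threads[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; i++) {
        // Distinct, non-overlapping addresses by construction
        // (assumes the device has at least NUM_THREADS erase blocks).
        ctx[i].addr = (bd_addr_t)i * block_size;
        threads[i].start(callback(test_thread, &ctx[i]));
    }
    for (int i = 0; i < NUM_THREADS; i++) {
        threads[i].join();
    }
}
```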

@offirko
Contributor Author

offirko commented Feb 11, 2019

@davidsaada - please see my answer to Martin above.
If I had used "random" seeds, the address collision probability would have been very, very low (0.3%). But because the seed selection is fixed, it's not really random, and I believe that once the test runs successfully it will remain stable.

The addresses don't have to change each run, just be unique.
Because address seeds repeat themselves each run, rand() will
produce repeating results that should not collide between
different threads.
@offirko offirko force-pushed the offir_fix_random_block_in_general_bd_test branch from a68dcbc to be1c887 on February 11, 2019 12:48
@0xc0170
Contributor

0xc0170 commented Feb 11, 2019

CI started

@mbed-ci

mbed-ci commented Feb 11, 2019

Test run: SUCCESS

Summary: 12 of 12 test jobs passed
Build number: 1
Build artifacts
