
sys/net/gnrc: add gnrc_tx_sync for gnrc_sixlowpan_frag_sfr #16090

Merged (2 commits, May 24, 2022)

Conversation

@maribu (Member) commented Feb 24, 2021

Contribution description

Implement TX synchronization for gnrc_sixlowpan_frag_sfr and also extend the corresponding test.

Testing procedure

tests/gnrc_tx_sync should pass

Issues/PRs references

None

@maribu maribu requested a review from miri64 February 24, 2021 20:20
@maribu maribu added the Area: network Area: Networking label Feb 24, 2021
@miri64 miri64 self-assigned this Feb 24, 2021
@miri64 miri64 added the Type: enhancement The issue suggests enhanceable parts / The PR enhances parts of the codebase / documentation label Feb 24, 2021
@miri64 (Member) commented Feb 24, 2021

Question for understanding: why doesn't the first fragment send function require adaptation?

@maribu (Member, Author) commented Feb 24, 2021

For all fragments except the last, the idea is to strip off the TX sync snip prior to fragmentation and reattach it afterwards. If the packet gets dropped outside the fragmentation code for whatever reason, the TX sync snip will also be freed, which unblocks the thread waiting for the transmission attempt to conclude.

For the very last fragment, the TX sync snip is attached to the fragment to send and no longer reattached to the packet. Once that last fragment is freed by the network device (that is, after its transmission has completed), the thread syncing with TX is unblocked.

Since fragmentation is only done if the packet doesn't fit into a single frame, we know that the first fragment cannot also be the last. And only the last fragment needs the special treatment.

@maribu (Member, Author) commented Feb 24, 2021

To be honest: I'm not entirely sure whether this approach is indeed the most elegant or efficient way to do this. I didn't dive deeply into the code and only have a rough idea of how the fragmentation works. From that, the approach taken here seemed to be the most straightforward way to implement this feature.

I think it would be entirely possible to restructure the code to keep the TX sync snip in the packet and only strip it off for the very last fragment, attaching it there instead. But I'm not certain that this would play well with the code that copies the data from the packet into the fragments if there is a TX sync snip at the end of it. (The TX sync snip reports a data length of zero in the hope that this prevents its contents from ever being sent out, which might actually be good enough here.)

@miri64 (Member) commented Feb 24, 2021

> Since fragmentation is only done if the packet doesn't fit into a single frame, we know that the first fragment cannot also be the last. And only the last fragment needs the special treatment.

I know I am going into very theoretical territory with this (and talking about things that are not even implemented), but do we actually know that? Granted, for classic fragmentation we do not want to use fragmentation where it is not needed, but one could argue that a user also wants the benefits of fragment recovery for a single frame. I can imagine use cases where end-to-end (or end-to-gateway) ACKs for all 6LoWPAN data might be desirable.

@maribu (Member, Author) commented Feb 24, 2021

OK, I see. I also adapted the first-fragment routine and fixed two bugs of the previous version.

I'll mark this as WIP for now. I need to extend the test application a bit (so that the two bugs of the previous version would be caught) and I have the feeling that there is still something odd.

@maribu maribu added the State: WIP State: The PR is still work-in-progress and its code is not in its final presentable form yet label Feb 24, 2021
@miri64 (Member) commented Feb 25, 2021

Thanks for addressing my theoretical worries 😅

@MrKevinWeiss MrKevinWeiss added this to the Release 2021.07 milestone Jun 21, 2021
@MrKevinWeiss MrKevinWeiss removed this from the Release 2021.07 milestone Jul 15, 2021
stale bot commented Mar 2, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If you want me to ignore this issue, please mark it with the "State: don't stale" label. Thank you for your contributions.

@stale stale bot added the State: stale State: The issue / PR has no activity for >185 days label Mar 2, 2022
@stale stale bot closed this Apr 16, 2022
@maribu maribu reopened this Apr 16, 2022
@stale stale bot removed the State: stale State: The issue / PR has no activity for >185 days label Apr 16, 2022
@github-actions github-actions bot added Area: sys Area: System Area: tests Area: tests and testing framework labels Apr 16, 2022
@maribu (Member, Author) commented Apr 16, 2022

This is still very much needed; I will try to give it more priority.

Implement TX synchronization for gnrc_sixlowpan_frag_sfr
@maribu maribu force-pushed the gnrc_tx_sync_frag_sfr branch from 9135a3c to a9b4b90 on May 17, 2022 13:24
@maribu maribu added CI: ready for build If set, CI server will compile all applications for all available boards for the labeled PR and removed State: WIP State: The PR is still work-in-progress and its code is not in its final presentable form yet labels May 17, 2022
Review comment on tests/gnrc_tx_sync/main.c (outdated, resolved)
@maribu maribu force-pushed the gnrc_tx_sync_frag_sfr branch from a9b4b90 to 30bf790 on May 17, 2022 13:37
@maribu (Member, Author) commented May 17, 2022

I spiced up the test application to actually make sure that the test message payload is passed through. It does (intentionally) not parse the data sent via the netdev, but simply checks whether one of the iolist chunks ends with more than 4 bytes of the next expected payload chunk. This way, retransmissions are no longer counted, and the test will only pass if all of the payload content is sent.

Hence, a situation in which a retransmission is miscounted as the next chunk and the test incorrectly passes should no longer occur. I am therefore confident that this indeed works as expected.

@miri64 (Member) left a review comment

Code looks good to me; normal operation is not affected by this change, and I trust your testing. ACK

@miri64 (Member) commented May 23, 2022

Please squash.

@maribu maribu force-pushed the gnrc_tx_sync_frag_sfr branch from e01ae50 to 902aa8e on May 23, 2022 13:37
Testing with SFR can be controlled via `TEST_FRAG_SFR={0,1}`, just like testing with 6LoWPAN was controlled via `TEST_6LO={0,1}`.
@maribu maribu force-pushed the gnrc_tx_sync_frag_sfr branch from 902aa8e to adad17e on May 23, 2022 18:10
@miri64 miri64 enabled auto-merge May 23, 2022 18:13
@miri64 miri64 merged commit 078bced into RIOT-OS:master May 24, 2022
@maribu maribu deleted the gnrc_tx_sync_frag_sfr branch May 24, 2022 04:59
@maribu (Member, Author) commented May 24, 2022

Thanks (:

@chrysn chrysn added this to the Release 2022.07 milestone Aug 25, 2022