outbound: skip logical stacks when no profile is discovered #963
Conversation
Signed-off-by: Eliza Weisman <eliza@buoyant.io>
This branch builds on #963 by changing the traffic split service to require its target type implement `Param<profiles::Receiver>` rather than requiring `Param<Option<profiles::Receiver>>`. This lets us simplify the implementation significantly by removing the "default" case that's built when no profile was discovered. Depends on #963 Signed-off-by: Eliza Weisman <eliza@buoyant.io>
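The bound change can be sketched with a simplified stand-in for the `Param` trait (a hypothetical minimal model for illustration, not the real `linkerd-stack` definitions):

```rust
// Simplified stand-in for a `Param`-style trait; all types below are
// illustrative, not the actual linkerd2-proxy definitions.
trait Param<T> {
    fn param(&self) -> T;
}

#[derive(Clone, Debug, PartialEq)]
struct Receiver(&'static str); // stand-in for profiles::Receiver

struct Target(Receiver);

// After this change, the target type always carries a profile...
impl Param<Receiver> for Target {
    fn param(&self) -> Receiver {
        self.0.clone()
    }
}

// ...so the split service can require `Param<Receiver>` directly and drop
// the "default" branch it previously built for the `None` case.
fn split_profile<T: Param<Receiver>>(target: &T) -> Receiver {
    target.param()
}
```

Requiring the non-optional bound moves the "no profile" case out of the split service entirely: it simply cannot be constructed without one.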
(thanks @kleimkuhler) Signed-off-by: Eliza Weisman <eliza@buoyant.io>
sorry kevin it doesn't compile Signed-off-by: Eliza Weisman <eliza@buoyant.io>
Looks good! Sounds good about a follow-up to remove more of the `Option<profiles::Receiver>`s.
FWIW already started on this in #964
This is similar to PR #963 but on the inbound side. Since the inbound proxy only requires logical targets in the HTTP profile route layers, this change is much simpler --- protocol detection etc are all above the profile resolution layer. We can simply add an `UnwrapOr` layer that skips the profile route layers. This change is much less important than the corresponding outbound change, but the two changes together will permit us to simplify the profile route layer by taking a `Param<profiles::Receiver>` rather than a `Param<Option<profiles::Receiver>>`. This does also require adding an additional `BoxResponse` layer to erase differing response body types based on whether or not profile route metrics are recorded.
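A minimal sketch of the `UnwrapOr`-style switch described above, with services modeled as plain functions (all names here are illustrative assumptions, not the proxy's actual API):

```rust
// Service composition is modeled with plain functions here; the real stack
// composes tower layers, and "profile-route" stands in for the HTTP profile
// route layers (per-route metrics, config).
#[derive(Clone)]
struct Receiver; // stand-in for profiles::Receiver

fn inner_http(req: &str) -> String {
    format!("http({})", req)
}

// The switch: with a resolved profile, requests pass through the profile
// route layers; without one, those layers are skipped entirely. (In the real
// change, a `BoxResponse` layer then erases the differing response body
// types produced by the two paths.)
fn handle(profile: Option<Receiver>, req: &str) -> String {
    match profile {
        Some(_rx) => format!("profile-route({})", inner_http(req)),
        None => inner_http(req),
    }
}
```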
@olix0r okay, I've addressed all of your feedback & I think this is ready for another look when you have a chance!
This is looking good though there's a bit of cleanup we should address. Doesn't necessarily have to be in this PR...
This branch changes the `linkerd_service_profiles::http::route_request` layer to require a `Param<profiles::Receiver>` rather than a `Param<Option<profiles::Receiver>>` target type. This is possible because the inbound and outbound logical destination stacks are now only built when a profile is resolved (as of PRs #963 and #966). Depends on #963 Depends on #966
Depends on #971

Currently, the proxy should only perform `Destination.Get` resolutions for logical (named) addresses provided by `Destination.GetProfile` lookups, and never for IP addresses. However, this is not *guaranteed*, as the destination resolver still accepts a `LookupAddr` type which may be either a logical named address *or* an IP address. In order to remove support for IP lookups from the control plane's `Destination.Get` implementation, we should ensure that the proxy will *never* attempt to resolve an IP (see linkerd/linkerd2#5246). Now that the `Logical` target type and the logical stacks are only constructed when a profile is resolved (as of PR #963), we can change the proxy to ensure statically that this never happens. This branch makes the following changes:

* Currently (as of #963), the `Logical` target type has an `Option<LogicalAddr>`. We attempt to unwrap the `Option<Receiver>` returned by a profile lookup and build a logical stack when there is a profile; the `Logical` target is built with the `Option<LogicalAddr>` from that profile. However, this is not actually the ideal behavior: ideally, we would only construct the logical stack when there is a resolved profile *and* the profile includes a logical address. Therefore, this branch changes the `push_unwrap_logical` function added in #963 to use a custom filter predicate that only constructs the `Logical` target when the profile resolution is `Some` _and_ the profile's `addr` field is `Some`. Now, `Logical` targets are only built when there is a known logical address.
* The `Destination.Get` resolver requires a target type that implements `Param<ConcreteAddr>`. Currently, `ConcreteAddr` is a newtype around `Addr`, which may be either a named address or an IP address. This branch changes `ConcreteAddr` to a newtype around `NameAddr`. This means the resolver now ensures, at the type level, that `Destination.Get` queries are only performed for named addresses.
* Similarly, the `Target` type in `linkerd-service-profiles` is changed to store a `NameAddr` rather than an `Addr`.
* Several of the outbound stack tests previously exercised invalid behavior that should no longer happen: building a logical stack and calling it with a target that doesn't include a logical address. I rewrote these tests so that the tests without logical addresses don't build a logical stack, and added a new test for calling the logical stack with a logical address. In the process, I did some cleanup and refactoring in the outbound TCP stack tests.
* Finally, while debugging test failures, I noticed that the `fmt::Debug` output for some of the target types that include `NameAddr`s was unnecessarily verbose. I added custom `Debug` impls that should make these a bit more concise.

This is the proxy component of linkerd/linkerd2#5246.

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
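The two type-level guarantees above can be sketched roughly as follows (simplified, hypothetical types rather than the actual linkerd2-proxy definitions):

```rust
// Simplified stand-ins, not the actual linkerd2-proxy definitions:
#[derive(Clone, Debug, PartialEq)]
struct NameAddr(String); // always a named address, never an IP

// `ConcreteAddr` wrapping `NameAddr` means a Destination.Get query can only
// be constructed from a named address; an IP simply cannot produce one.
#[derive(Clone, Debug, PartialEq)]
struct ConcreteAddr(NameAddr);

struct Profile {
    addr: Option<NameAddr>, // a profile may resolve without a logical address
}

// The filter predicate: a logical target (and thus a resolvable concrete
// address) exists only when a profile was resolved AND it carries an addr.
fn unwrap_logical(profile: Option<Profile>) -> Option<ConcreteAddr> {
    profile.and_then(|p| p.addr).map(ConcreteAddr)
}
```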
This release primarily improves protocol detection error messages in the admin server so that logs clearly indicate when the client expected a different TLS server identity than that of the running proxy. A number of internal improvements have been made, especially eliminating some potential runtime panics detected by oss-fuzz. It is not expected that these panics could be triggered in typical cluster configurations.

---

* discover: replace `linkerd-channel` with `tokio-util` `PollSender` (linkerd/linkerd2-proxy#969)
* replace `linkerd-channel` with `tokio-stream` (linkerd/linkerd2-proxy#970)
* concurrency-limit: use `tokio-util`'s `PollSemaphore` (linkerd/linkerd2-proxy#968)
* http: Do not fail fuzz tests when all IO is not read (linkerd/linkerd2-proxy#973)
* transport: Fix orig-dst compilation on non-Linux targets (linkerd/linkerd2-proxy#974)
* Update trust-dns to fix possible panic (linkerd/linkerd2-proxy#975)
* addr: fix `to_http_authority` panic with IPv6 (linkerd/linkerd2-proxy#976)
* outbound: skip logical stacks when no profile is discovered (linkerd/linkerd2-proxy#963)
* split: change traffic splits to require a profile (linkerd/linkerd2-proxy#964)
* inbound: only build profile route stacks when a profile is resolved (linkerd/linkerd2-proxy#966)
* profiles: make receiver param in `route_request` non-optional (linkerd/linkerd2-proxy#967)
* outbound: move target types into stack modules (linkerd/linkerd2-proxy#971)
* outbound: only build logical stacks for profiles with logical addrs (linkerd/linkerd2-proxy#972)
* app: inbound: add fuzzer (linkerd/linkerd2-proxy#977)
* admin: Fail connections when HTTP detection fails (linkerd/linkerd2-proxy#979)
* reduce error boilerplate with `thiserror` (linkerd/linkerd2-proxy#980)
* app: inbound: fuzzer: generalise fuzzers and solve coverage build (linkerd/linkerd2-proxy#978)
* admin: Assume meshed connections are HTTP/2 (linkerd/linkerd2-proxy#982)
* chore: Fix deprecations on nightly (linkerd/linkerd2-proxy#983)
In #963 we began to split the outbound stack so that the logical stack was only built when a profile resolution was successful. However, this incorrectly handled the case when a profile resolution included an endpoint in its response. This change introduces an `outbound::switch_logical` module that takes an optional profile and builds either an endpoint stack or a logical stack. Logical stacks always have a profile resolution and a (named) logical address. Endpoint stacks are built otherwise, optionally with metadata from the profile resolution. In order to test this (and to simplify and focus tests in general), I've moved tests from `outbound/src/{tcp,http}/tests.rs` into the modules they test. Now each test more narrowly targets the stacks whose logic it tests, rather than composing an entire outbound stack. It was becoming cumbersome to support mocking each component of the larger stack. Now, we more intentionally test targeted logic in each stack (including the newly fixed endpoint/logical stack switching). Supersedes #988
This release simplifies internals so that endpoint-forwarding logic is completely distinct from handling of load balanced services. The ingress-mode outbound proxy has been simplified to *require* the `l5d-dst-override` header and to fail non-HTTP communication. This ensures that the ingress-mode proxy does not unexpectedly revert to insecure communication. Finally, a regression was recently introduced that caused all proxy logs to be output with ANSI control characters. Logs are now output in plaintext by default.

---

* discover: replace `linkerd-channel` with `tokio-util` `PollSender` (linkerd/linkerd2-proxy#969)
* replace `linkerd-channel` with `tokio-stream` (linkerd/linkerd2-proxy#970)
* concurrency-limit: use `tokio-util`'s `PollSemaphore` (linkerd/linkerd2-proxy#968)
* http: Do not fail fuzz tests when all IO is not read (linkerd/linkerd2-proxy#973)
* transport: Fix orig-dst compilation on non-Linux targets (linkerd/linkerd2-proxy#974)
* Update trust-dns to fix possible panic (linkerd/linkerd2-proxy#975)
* addr: fix `to_http_authority` panic with IPv6 (linkerd/linkerd2-proxy#976)
* outbound: skip logical stacks when no profile is discovered (linkerd/linkerd2-proxy#963)
* split: change traffic splits to require a profile (linkerd/linkerd2-proxy#964)
* inbound: only build profile route stacks when a profile is resolved (linkerd/linkerd2-proxy#966)
* profiles: make receiver param in `route_request` non-optional (linkerd/linkerd2-proxy#967)
* outbound: move target types into stack modules (linkerd/linkerd2-proxy#971)
* outbound: only build logical stacks for profiles with logical addrs (linkerd/linkerd2-proxy#972)
* app: inbound: add fuzzer (linkerd/linkerd2-proxy#977)
* admin: Fail connections when HTTP detection fails (linkerd/linkerd2-proxy#979)
* reduce error boilerplate with `thiserror` (linkerd/linkerd2-proxy#980)
* app: inbound: fuzzer: generalise fuzzers and solve coverage build (linkerd/linkerd2-proxy#978)
* admin: Assume meshed connections are HTTP/2 (linkerd/linkerd2-proxy#982)
* chore: Fix deprecations on nightly (linkerd/linkerd2-proxy#983)
* fuzz: Add logging to fuzz targets (linkerd/linkerd2-proxy#985)
* fuzz: don't panic when the proxy closes the conn (linkerd/linkerd2-proxy#986)
* Commit lock files for fuzzers (linkerd/linkerd2-proxy#984)
* fuzz: allow client requests to fail (linkerd/linkerd2-proxy#989)
* tower: update dependency to 0.4.7 (linkerd/linkerd2-proxy#990)
* outbound: Make the Endpoint::logical_addr field optional (linkerd/linkerd2-proxy#991)
* trace: explicitly disable ANSI terminal colors (linkerd/linkerd2-proxy#994)
* ingress: Require the l5d-dst-override header (linkerd/linkerd2-proxy#992)
* outbound: Do not support TCP-forwarding in ingress-mode (linkerd/linkerd2-proxy#995)
* Decouple tcp forward stack from Endpoint target (linkerd/linkerd2-proxy#996)
* Pickup linkerd-await wrapper in docker build (linkerd/linkerd2-proxy#999)
* docs: Add fuzzing report (linkerd/linkerd2-proxy#1000)
* tests: Use io::Error in mocked connector (linkerd/linkerd2-proxy#1001)
* outbound: Decouple endpoint & logical stacks (linkerd/linkerd2-proxy#1002)
Currently, the outbound proxy's `Logical` target type contains an `Option<profiles::Receiver>`. When no profile is discovered, this field is set to `None`; the per-profile layers in the logical stack do nothing when this field is `None`. However, we still construct this whole stack for all destinations, even those without logical addresses.

This PR refactors the outbound proxy so that `Logical` targets always have discovered profiles, and the logical stack is skipped when no profile is discovered. Profile discovery still returns an `Option<profiles::Receiver>`, but we unwrap it in one place immediately after the profile discovery layer, and skip directly to the endpoint stack when it is `None`. This means that we can now avoid constructing layers like traffic split etc. when there is no profile for a target, rather than building a traffic split that does nothing.

Note that some of the layers in the logical stacks currently do still have code for handling the case where a `Logical` target has no profile. These layers currently take targets that are bounded with `Param<Option<profiles::Receiver>>`. The `Logical` type still has its `Param<Option<profiles::Receiver>>` impl, it just always returns `Some` now. I chose to leave this code as-is since this branch is already a fairly large refactor. We can simplify a lot of this code in a follow-up (although some layers may also require a similar change on the inbound side).

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
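The shape of the refactor can be sketched as follows, with stand-in types (names are illustrative assumptions, not the proxy's actual API):

```rust
// Stand-in types; names are illustrative rather than the proxy's actual API.
#[derive(Clone, Debug)]
struct Receiver; // stand-in for profiles::Receiver

enum Target {
    Logical(Receiver), // logical stack: a profile is guaranteed present
    Endpoint,          // no profile: traffic split etc. are never constructed
}

// The Option is unwrapped exactly once, immediately after profile discovery;
// everything below this point sees either a Logical or an Endpoint target.
fn unwrap_profile(discovered: Option<Receiver>) -> Target {
    match discovered {
        Some(rx) => Target::Logical(rx),
        None => Target::Endpoint,
    }
}
```

Branching once at discovery time, rather than checking the `Option` in every layer, is what lets the logical stack assume a profile is always present.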