
Add support for remote write relabelling. #2047

Merged: 1 commit merged into master from write-relabel on Oct 5, 2016
Conversation

brian-brazil (Contributor)

Switch back to a single remote writer, as we were only ever meant to have one and the relabel semantics are clearer that way.

Fixes #1955
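
An editorial aside for readers without the diff in front of them: after this change, prometheus.yml carries a single remote_write section whose write_relabel_configs filter or rewrite samples just before they are sent upstream. A minimal sketch (the URL and the rule are placeholders, and the exact top-level shape of that era's config may differ slightly):

```yaml
# Illustrative sketch only: one remote writer, with write relabelling
# applied to every sample before it leaves Prometheus.
remote_write:
  url: "http://remote-adapter.example.org/receive"  # placeholder adapter URL
  write_relabel_configs:
    # Example rule: drop high-cardinality histogram buckets from the
    # remote stream while keeping them in local storage.
    - source_labels: [__name__]
      regex: ".*_bucket"
      action: drop
```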

@juliusv

```diff
@@ -40,33 +41,32 @@ func (s *ReloadableStorage) ApplyConfig(conf *config.Config) error {
 	s.mtx.Lock()
 	defer s.mtx.Unlock()

 	// TODO: we should only stop & recreate queues which have changes,
 	// as this can be quite disruptive.
```
juliusv (Member)

This TODO is still valid (in modified wording) even after your changes.

brian-brazil (Contributor, Author)

Done

juliusv (Member) commented Oct 4, 2016

👍 besides comment.

@brian-brazil merged commit 6e8f87a into master on Oct 5, 2016
@brian-brazil deleted the write-relabel branch on October 5, 2016 at 06:47
tomwilkie (Member)

> Switch back to a single remote writer, as we were only ever meant to have one

Have only just spotted this! I had intended this to be a list, so you could send the samples to multiple downstream services - for instance, we want to allow people to send samples to two clusters (for HA), and we want to be able to send samples from our dev environment to both our dev & prod clusters, and vice versa.

The alternative (for us) is to run multiple Prometheus instances, one pushing to each cluster. The problem with this is managing the config (as k8s ConfigMaps) - we end up with multiple copies of the same config, and it's a pain to ensure they stay in sync.

Would you be open to me revising this? Alternatively, would you be open to Prometheus loading multiple configs and defining some kind of config merge?

brancz (Member) commented Jan 10, 2017

Why don't you mount the same ConfigMap instead of syncing multiple if they are the same?

tomwilkie (Member)

> Why don't you mount the same ConfigMap instead of syncing multiple if they are the same?

They need to differ very slightly (the remote storage URL & credentials), and I'm not aware of a way of doing this with k8s other than a blind copy.
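
To make the duplication concrete, here is a hypothetical pair of configs of the kind being described: identical in every respect except the remote write target and credentials. All names, URLs, and credential fields below are made up for illustration.

```yaml
# ConfigMap "prometheus-dev" (hypothetical): full scrape config, plus
remote_write:
  url: "https://cortex.dev.example.org/push"  # dev ingest endpoint
  basic_auth:                                 # credentials differ per cluster
    username: dev-writer
    password: dev-secret

# ConfigMap "prometheus-prod" (hypothetical): a byte-for-byte copy of
# the same scrape config, differing only in:
#   url: "https://cortex.prod.example.org/push"
#   basic_auth: {username: prod-writer, password: prod-secret}
```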

brian-brazil (Contributor, Author)

Right now the answer is to have a receiver acting as a tee. Anything beyond that needs to wait at least until we've removed the legacy remote storage systems.

brancz (Member) commented Jan 10, 2017

OK, that makes sense. IIRC there is a proposal out there for templating in Kubernetes, but I don't remember its scope and can't find it right now.

tomwilkie (Member)

> Right now the answer is to have a receiver acting as a tee. Anything beyond that needs to wait at least until we've removed the legacy remote storage systems.

Hmmm, that's a bit gross (having a tee). Why does this need to wait on removing the legacy remote storage? What am I missing?

brian-brazil (Contributor, Author)

> Hmmm, that's a bit gross (having a tee).

The consensus when this was first discussed was that there was to be only one.

> Why does this need to wait on removing the legacy remote storage? What am I missing?

Think about how the config file would have to look without removing that.
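
An editorial gloss on the objection (my reading, not Brian's wording): the legacy writers were enabled by command-line flags rather than by config-file sections, while write relabelling lives in the config file and applies to every writer at once, so a repeated remote_write block would have no natural way to attach different relabel rules to each flag-configured legacy writer. Roughly (flag names as in Prometheus 1.x; URLs and the rule are placeholders):

```yaml
# Illustrative sketch of the mismatch. The 1.x legacy writers were
# switched on by flags, for example:
#   -storage.remote.graphite-address=graphite.example.org:2003
#   -storage.remote.opentsdb-url=http://opentsdb.example.org:4242/
# whereas the relabel rules below live in prometheus.yml and apply to
# all writers at once, legacy ones included. There is nowhere to say
# "drop staging samples for Graphite but keep them for OpenTSDB".
remote_write:
  url: "http://generic-adapter.example.org/receive"  # the one generic writer
  write_relabel_configs:
    - source_labels: [env]
      regex: staging
      action: drop
```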

tomwilkie (Member)

> The consensus when this was first discussed was that there was to be only one.

I don't specifically recall this - can you remind me of the tradeoffs behind this decision?

Either way, I don't think we should hold it as set in stone - we've modified previous decisions taken during that discussion (cf. gRPC), and if we have good reasons to re-evaluate it, we should.

> Think about how the config file would have to look without removing that.

The original PR adding generic remote storage to the config file (#1957) had multiple writers, and it looked fine to me... what do you see as the problem?

brian-brazil (Contributor, Author)

> I don't specifically recall this - can you remind me of the tradeoffs behind this decision?

It was in the big long initial thread about adding this support.

> The original PR adding generic remote storage to the config file (#1957) had multiple writers, and it looked fine to me... what do you see as the problem?

The semantics of relabelling there had not been considered, as having multiple writers hadn't been on the table.

tomwilkie (Member)

> The semantics of relabelling there had not been considered, as having multiple writers hadn't been on the table.

The relabelling config is part of the RemoteWriteConfig struct, and can be repeated too. What's the problem with this?

brian-brazil (Contributor, Author)

> The relabelling config is part of the RemoteWriteConfig struct, and can be repeated too. What's the problem with this?

How would that work for legacy remote storage?

Do we want the relabelling to be global or per receiver?

As I indicated back on the commit where this was added, by adding support for multiple receivers you completely blindsided me (this support looked to be ~2 years out), so none of this had been figured out in advance, and it was blocking more important work. It still hasn't been figured out.

juliusv (Member) commented Jan 19, 2017

I agree that multiple remote writers would be useful for Cortex, but also for any other case where you want to send data to multiple systems (like a long-term storage and a stream processor at the same time). Running yet another process outside of Prometheus just for teeing gets a bit ridiculous, given that you already have to run external adapter processes to convert Prometheus's remote write format into whatever each receiving system expects.

> How would that work for legacy remote storage?

Given that legacy remote storage is deprecated and soon to be removed, I don't think we need to support multiple instances of it anymore (that would also require moving them from flags to the config).

> Do we want the relabelling to be global or per receiver?

Per receiver seems to make the most sense, given that you may want to send different subsets of samples to different targets. The downside is having to repeat common relabellings, but that's better than not being able to send different subsets, IMO.

If we can agree on this, I think it makes sense to re-add support for multiple remote writers.
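
An editorial sketch of the per-receiver shape being proposed here, which is the direction later Prometheus releases ended up taking (the URLs and rules are placeholders):

```yaml
# Multiple remote writers, each with its own write relabelling.
remote_write:
  # Long-term storage receives everything except debug metrics.
  - url: "https://longterm.example.org/receive"
    write_relabel_configs:
      - source_labels: [__name__]
        regex: "debug_.*"
        action: drop
  # A stream processor only needs the request-rate series.
  - url: "https://streamproc.example.org/receive"
    write_relabel_configs:
      - source_labels: [__name__]
        regex: "http_requests_total"
        action: keep
```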

juliusv (Member) commented Jan 31, 2017

Since there's no reply here yet, I'm going to work on a PR that re-adds multiple remote writers, so we can discuss further there.

brian-brazil (Contributor, Author)

We need to kill off the legacy remote storage before that can be considered, otherwise we'd be making more breaking changes than is necessary.

juliusv (Member) commented Jan 31, 2017

@brian-brazil I think I finally understand now. I had wrongly assumed that the remote write relabelling added here only applied to the new generic remote writes, not to all the legacy ones as well. That would have made intuitive sense to me, since it's part of the generic remote write configuration section.

OK, I would love to get rid of the legacy ones (and provide external reference implementations for them if needed), but I will check the priority of that with my Weaveworks colleagues.

juliusv (Member) commented Jan 31, 2017

External legacy example support is out in #2381.
