
Error running decompress_backfill: "permission denied for table bgw_job_stat" #28

Open
crstffr opened this issue Jul 20, 2022 · 6 comments

Comments

@crstffr

crstffr commented Jul 20, 2022

I'm evaluating Timescale Cloud and am following the instructions on how to backfill data into compressed chunks, found here.

  1. I added all of the backfill extra procedures found here in backfill.sql
  2. I created a temp table called import_temp from the schema of an existing hypertable
  3. I populated the temp table with data I want to backfill
  4. I ran CALL decompress_backfill(staging_table=>'import_temp', destination_hypertable=>'mytable');
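In SQL terms, the first three steps were roughly the following (the source table in the INSERT is a stand-in for however you populate the staging data):

CREATE TEMP TABLE import_temp (LIKE mytable INCLUDING DEFAULTS);
INSERT INTO import_temp SELECT * FROM my_backfill_source;  -- hypothetical source
CALL decompress_backfill(staging_table=>'import_temp', destination_hypertable=>'mytable');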

Result is the following error:

ERROR:  permission denied for table bgw_job_stat
CONTEXT:  SQL statement "SELECT next_start FROM _timescaledb_internal.bgw_job_stat WHERE job_id = compression_job_id FOR UPDATE"
PL/pgSQL function move_compression_job(integer,name,name,timestamp with time zone) line 19 at SQL statement
SQL statement "SELECT move_compression_job(hypertable_row.id, hypertable_row.schema_name, hypertable_row.table_name, now() + compression_job_push_interval)"
PL/pgSQL function decompress_backfill(regclass,regclass,text,boolean,interval,text[],boolean) line 45 at SQL statement

This type of backfilling will be common for our business case, and as such, this error prevents me from further evaluating Timescale.

Please let me know if I can provide you with further information, I would like to get this resolved quickly.

Thank you,
Chris

@xginn8

xginn8 commented Sep 8, 2022

@crstffr I just ran into this issue as well, and was able to work around it by first manually removing the compression policy and then rerunning the backfill job. Manually stopping the compression job wasn't enough; the policy itself had to be removed before the backfill would run.

tsdb=> CALL decompress_backfill(staging_table=>'temp', destination_hypertable=>'db');
ERROR:  permission denied for table bgw_job_stat
...
tsdb=> select remove_compression_policy('db');
 remove_compression_policy 
---------------------------
 t
(1 row)
tsdb=> CALL decompress_backfill(staging_table=>'temp', destination_hypertable=>'db');
NOTICE:  Decompressing chunk: _timescaledb_internal._hyper_5_16_chunk

@dariogit-bit

@xginn8 @crstffr Thank you for the workaround, it works smoothly. One question, though:
Do we need to remove the compression policy every time before running decompress_backfill?
And does this mean we need to re-add it with add_compression_policy every time afterwards?
That seems a bit odd. Since this procedure is actually in the documentation, it feels like it should work better than that.
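If that is the case, I assume the full round trip per backfill would look something like this (the '7 days' interval is just a placeholder for whatever the original policy used):

tsdb=> SELECT remove_compression_policy('db');
tsdb=> CALL decompress_backfill(staging_table=>'temp', destination_hypertable=>'db');
tsdb=> SELECT add_compression_policy('db', INTERVAL '7 days');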

@Petwag

Petwag commented Oct 7, 2022

Good evening,

Is there any 'official' news about this particular issue?

From the look of this page: https://docs.timescale.com/timescaledb/latest/how-to-guides/compression/backfill-historical-data/
I get the feeling that the compression policy should be taken care of by decompress_backfill.

If we have to manually disable and re-enable compression, I might as well do the whole process manually 😅

@mrenit

mrenit commented Nov 18, 2022

Good morning,

I also have this issue. Is there any update on it?

See you

@svechinsky

+1 for this

@ivanzamanov

Thought I'd share a temporary workaround - here's a procedure that removes the compression policy, runs the backfill, and then re-creates the policy afterwards.

CREATE OR REPLACE PROCEDURE decompress_backfill_wrapper(staging_table regclass,
    destination_hypertable regclass,
    on_conflict_action text DEFAULT 'NOTHING',
    delete_from_staging bool DEFAULT true,
    compression_job_push_interval interval DEFAULT '1 day',
    on_conflict_update_columns text[] DEFAULT '{}',
    skip_empty_ranges boolean DEFAULT false,
    on_conflict_target text DEFAULT '')
AS $proc$
DECLARE
    compress_after interval;

    dest_nspname name;
    dest_relname name;
BEGIN
    -- Resolve the hypertable's schema and table name (helper defined in backfill.sql).
    SELECT (get_schema_and_table_name(destination_hypertable)).* INTO STRICT dest_nspname, dest_relname;

    -- Remember the existing policy's compress_after setting so it can be restored afterwards.
    SELECT json_extract_path(config::json, 'compress_after')::text::interval INTO compress_after FROM timescaledb_information.jobs WHERE
        proc_name = 'policy_compression' AND
        hypertable_schema = dest_nspname AND
        hypertable_name  = dest_relname;

    -- Drop the policy; this is what avoids the "permission denied for table bgw_job_stat" error.
    PERFORM remove_compression_policy(destination_hypertable, if_exists => true);

    CALL decompress_backfill(staging_table, destination_hypertable, on_conflict_action, delete_from_staging,
    compression_job_push_interval, on_conflict_update_columns, skip_empty_ranges, on_conflict_target);

    -- Re-create the policy only if one existed before.
    IF compress_after IS NOT NULL THEN
        PERFORM add_compression_policy(destination_hypertable, compress_after);
    END IF;
END;
$proc$
LANGUAGE plpgsql;
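The wrapper takes the same arguments as decompress_backfill, so usage looks like this (table names are placeholders):

CALL decompress_backfill_wrapper(staging_table=>'temp', destination_hypertable=>'db');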
