{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":84240850,"defaultBranch":"main","name":"timescaledb","ownerLogin":"timescale","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2017-03-07T20:03:41.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/8986001?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1717748420.0","currentOid":""},"activityList":{"items":[{"before":"c2a85b1fb8556e828f111f4e1aa90ecfef55f133","after":"c978c69a94223b8218f6f9cc8988e68cf89a804f","ref":"refs/heads/main","pushedAt":"2024-06-12T10:22:12.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"akuzm","name":"Alexander Kuzmenkov","path":"/akuzm","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/36882414?s=80&v=4"},"commit":{"message":"Remove unneeded Sort over Sort (#6893)\n\nWe would add extra Sort nodes when adjusting the children of space\r\npartitioning MergeAppend under ChunkAppend. This is not needed because\r\nMergeAppend plans add the required Sort themselves, and in general no\r\nadjustment seems to be required for the MergeAppend children\r\nspecifically there.","shortMessageHtmlLink":"Remove unneeded Sort over Sort (#6893)"}},{"before":"be15ae68b130ecae25829810656441b217381d20","after":"c2a85b1fb8556e828f111f4e1aa90ecfef55f133","ref":"refs/heads/main","pushedAt":"2024-06-12T08:08:58.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"fabriziomello","name":"Fabrízio de Royes Mello","path":"/fabriziomello","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/612482?s=80&v=4"},"commit":{"message":"Reduce drop_extension test flakiness\n\nIn this regression test we check the `DROP SCHEMA public CASCADE;` for\nremoving the extension objects stored in that schema. 
The problem is if\nat the same moment we're running a background job it will lead to a\ndeadlock because the scheduler maintain information about each execution\nin the metadata tables that should be part of the cascade schema\nremoval.\n\nForced stop the background jobs and also terminate any running job.\n\nhttps://github.com/timescale/timescaledb/actions/runs/9424506293/job/25964852620","shortMessageHtmlLink":"Reduce drop_extension test flakiness"}},{"before":"54d5ba1b81e265c8183d19f7f6a6dfd72834a941","after":"be15ae68b130ecae25829810656441b217381d20","ref":"refs/heads/main","pushedAt":"2024-06-07T20:07:45.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"fabriziomello","name":"Fabrízio de Royes Mello","path":"/fabriziomello","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/612482?s=80&v=4"},"commit":{"message":"Post release 2.15.2","shortMessageHtmlLink":"Post release 2.15.2"}},{"before":"79e4438c90c04901bdaa70eaa2cf12bf3dc57932","after":"54d5ba1b81e265c8183d19f7f6a6dfd72834a941","ref":"refs/heads/main","pushedAt":"2024-06-07T14:16:11.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"fabriziomello","name":"Fabrízio de Royes Mello","path":"/fabriziomello","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/612482?s=80&v=4"},"commit":{"message":"Release 2.15.2\n\nThis release contains performance improvements and bug fixes since\nthe 2.15.0 release. Best practice is to upgrade at the next\navailable opportunity.\n\n**Migrating from self-hosted TimescaleDB v2.14.x and earlier**\n\nAfter you run `ALTER EXTENSION`, you must run [this SQL script](https://github.com/timescale/timescaledb-extras/blob/master/utils/2.15.X-fix_hypertable_foreign_keys.sql). 
For more details, see the following pull request [#6797](https://github.com/timescale/timescaledb/pull/6797).\n\nIf you are migrating from TimescaleDB v2.15.0 or v2.15.1, no changes are required.\n\n**Bugfixes**\n* #6975: Fix sort pushdown for partially compressed chunks.\n* #6976: Fix removal of metadata function and update script.\n* #6978: Fix segfault in compress_chunk with primary space partition.\n* #6993: Disallow hash partitioning on primary column.\n\n**Thanks**\n* @gugu for reporting the issue with catalog corruption due to update.\n* @srieding for reporting an issue with partially compressed chunks and ordering on joined columns.","shortMessageHtmlLink":"Release 2.15.2"}},{"before":"9812a457dabec86ba19ccce3b74d0f966ca3c9a2","after":"903847e4d69938b8bed8d1d13cb84053e243e262","ref":"refs/heads/2.15.x","pushedAt":"2024-06-07T08:12:17.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"pallavisontakke","name":"Pallavi Sontakke","path":"/pallavisontakke","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/13624181?s=80&v=4"},"commit":{"message":"Release 2.15.2 (#7014)\n\nThis release contains bug fixes since the\r\n2.15.1 release. 
Best practice is to upgrade at the next available\r\nopportunity.\r\n\r\n**Bugfixes**\r\n* #6975: Fix sort pushdown for partially compressed chunks.\r\n* #6976: Fix removal of metadata function and the update script.\r\n* #6978: Fix segfault in `compress_chunk` with primary space partition.\r\n* #6993: Disallow hash partitioning on the primary column.\r\n\r\n**Thanks**\r\n* @gugu for reporting the issue with catalog corruption due to update.\r\n* @srieding for reporting the issue with partially compressed chunks and\r\nordering on joined columns.","shortMessageHtmlLink":"Release 2.15.2 (#7014)"}},{"before":"2525b810ac937a915f7ac6698dfc72b592389623","after":"9812a457dabec86ba19ccce3b74d0f966ca3c9a2","ref":"refs/heads/coverity_scan","pushedAt":"2024-06-06T19:06:07.000Z","pushType":"push","commitsCount":12,"pusher":{"login":"fabriziomello","name":"Fabrízio de Royes Mello","path":"/fabriziomello","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/612482?s=80&v=4"},"commit":{"message":"Allow CAggs with variable sized bucked with origin/offset (#7005)\n\nOn 2.15.x we added complete support of CAggs using time bucket with\norigin and/or offset, but we restrict the creating when the bucked size\nis variable due to some uncertaing regarding monthly buckets.\n\nWhen bucketing by month we always align with the beginning of the month\neven defining an origin with day component. 
So to be consistent with the\ncurrent implementation we'll not change this behavior and allow it to be\nused in Continuous Aggregates.\n\nDisable-check: force-changelog-file\n\n(cherry picked from commit af8ca2dab0e3989b6fd12722894c5399215a84a7)","shortMessageHtmlLink":"Allow CAggs with variable sized bucked with origin/offset (#7005)"}},{"before":"6a8b31f866308362b76287da47473a13d279d4a6","after":"9812a457dabec86ba19ccce3b74d0f966ca3c9a2","ref":"refs/heads/prerelease_test","pushedAt":"2024-06-06T18:29:37.000Z","pushType":"push","commitsCount":11,"pusher":{"login":"fabriziomello","name":"Fabrízio de Royes Mello","path":"/fabriziomello","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/612482?s=80&v=4"},"commit":{"message":"Allow CAggs with variable sized bucked with origin/offset (#7005)\n\nOn 2.15.x we added complete support of CAggs using time bucket with\norigin and/or offset, but we restrict the creating when the bucked size\nis variable due to some uncertaing regarding monthly buckets.\n\nWhen bucketing by month we always align with the beginning of the month\neven defining an origin with day component. 
So to be consistent with the\ncurrent implementation we'll not change this behavior and allow it to be\nused in Continuous Aggregates.\n\nDisable-check: force-changelog-file\n\n(cherry picked from commit af8ca2dab0e3989b6fd12722894c5399215a84a7)","shortMessageHtmlLink":"Allow CAggs with variable sized bucked with origin/offset (#7005)"}},{"before":"c342181415d845eb5e51468ae7153e8f810dae6c","after":"9812a457dabec86ba19ccce3b74d0f966ca3c9a2","ref":"refs/heads/2.15.x","pushedAt":"2024-06-06T16:14:09.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"fabriziomello","name":"Fabrízio de Royes Mello","path":"/fabriziomello","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/612482?s=80&v=4"},"commit":{"message":"Allow CAggs with variable sized bucked with origin/offset (#7005)\n\nOn 2.15.x we added complete support of CAggs using time bucket with\norigin and/or offset, but we restrict the creating when the bucked size\nis variable due to some uncertaing regarding monthly buckets.\n\nWhen bucketing by month we always align with the beginning of the month\neven defining an origin with day component. 
So to be consistent with the\ncurrent implementation we'll not change this behavior and allow it to be\nused in Continuous Aggregates.\n\nDisable-check: force-changelog-file\n\n(cherry picked from commit af8ca2dab0e3989b6fd12722894c5399215a84a7)","shortMessageHtmlLink":"Allow CAggs with variable sized bucked with origin/offset (#7005)"}},{"before":"d0789a0fe70ac13f4bf7de5c984d3342d43c86af","after":"79e4438c90c04901bdaa70eaa2cf12bf3dc57932","ref":"refs/heads/main","pushedAt":"2024-06-06T16:13:49.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"fabriziomello","name":"Fabrízio de Royes Mello","path":"/fabriziomello","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/612482?s=80&v=4"},"commit":{"message":"Add known flaky tests to linux32bit CI\n\nThe `bgw_db_scheduler*` are known to be a bit flaky and the results are\nignored in normal Linux/MacOS CI but currently Linux32bit don't use the\ngh_matrix_builder.py so those tests are not added by default.","shortMessageHtmlLink":"Add known flaky tests to linux32bit CI"}},{"before":"af8ca2dab0e3989b6fd12722894c5399215a84a7","after":"d0789a0fe70ac13f4bf7de5c984d3342d43c86af","ref":"refs/heads/main","pushedAt":"2024-06-06T14:33:25.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"fabriziomello","name":"Fabrízio de Royes Mello","path":"/fabriziomello","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/612482?s=80&v=4"},"commit":{"message":"Reorder some tests in bgw_custom\n\nMade some small tweaks in the testing order to actually do what the\ncomments are saying.","shortMessageHtmlLink":"Reorder some tests in 
bgw_custom"}},{"before":"396646123c64713c296821903ecbf8440c986def","after":null,"ref":"refs/heads/backport/2.15.x/6984","pushedAt":"2024-06-06T07:24:32.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"timescale-automation","name":"Eon","path":"/timescale-automation","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/123763385?s=80&v=4"}},{"before":"453b85fa0dec061f4e02d0de591ba2e5ed4b6eea","after":"c342181415d845eb5e51468ae7153e8f810dae6c","ref":"refs/heads/2.15.x","pushedAt":"2024-06-06T07:24:31.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"timescale-automation","name":"Eon","path":"/timescale-automation","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/123763385?s=80&v=4"},"commit":{"message":"Fix show_chunks on hypertable by hash\n\nUsing the new hypertable API creating closed dimentions when\npartitioning by hash was leading to a segfault on show_chunks.\n\nFixed it by checking for the closed primary dimension and erroring out\nin case of using parameters `older_than` and `newer_than`.\n\n(cherry picked from commit 4aff9b1748c406149a3afd473bbc59c0c7ecdd12)","shortMessageHtmlLink":"Fix show_chunks on hypertable by hash"}},{"before":null,"after":"396646123c64713c296821903ecbf8440c986def","ref":"refs/heads/backport/2.15.x/6984","pushedAt":"2024-06-06T06:22:34.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"github-actions[bot]","name":null,"path":"/apps/github-actions","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/15368?s=80&v=4"},"commit":{"message":"Fix show_chunks on hypertable by hash\n\nUsing the new hypertable API creating closed dimentions when\npartitioning by hash was leading to a segfault on show_chunks.\n\nFixed it by checking for the closed primary dimension and erroring out\nin case of using parameters `older_than` and `newer_than`.\n\n(cherry picked from commit 4aff9b1748c406149a3afd473bbc59c0c7ecdd12)","shortMessageHtmlLink":"Fix show_chunks on 
hypertable by hash"}},{"before":"4aff9b1748c406149a3afd473bbc59c0c7ecdd12","after":"af8ca2dab0e3989b6fd12722894c5399215a84a7","ref":"refs/heads/main","pushedAt":"2024-06-06T06:22:17.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"pallavisontakke","name":"Pallavi Sontakke","path":"/pallavisontakke","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/13624181?s=80&v=4"},"commit":{"message":"Allow CAggs with variable sized bucked with origin/offset (#7005)\n\nOn 2.15.x we added complete support of CAggs using time bucket with\r\norigin and/or offset, but we restrict the creating when the bucked size\r\nis variable due to some uncertaing regarding monthly buckets.\r\n\r\nWhen bucketing by month we always align with the beginning of the month\r\neven defining an origin with day component. So to be consistent with the\r\ncurrent implementation we'll not change this behavior and allow it to be\r\nused in Continuous Aggregates.\r\n\r\nDisable-check: force-changelog-file","shortMessageHtmlLink":"Allow CAggs with variable sized bucked with origin/offset (#7005)"}},{"before":"45befd52f98401f401594bdb7d94dc5d97042720","after":"4aff9b1748c406149a3afd473bbc59c0c7ecdd12","ref":"refs/heads/main","pushedAt":"2024-06-06T06:20:32.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"fabriziomello","name":"Fabrízio de Royes Mello","path":"/fabriziomello","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/612482?s=80&v=4"},"commit":{"message":"Fix show_chunks on hypertable by hash\n\nUsing the new hypertable API creating closed dimentions when\npartitioning by hash was leading to a segfault on show_chunks.\n\nFixed it by checking for the closed primary dimension and erroring out\nin case of using parameters `older_than` and `newer_than`.","shortMessageHtmlLink":"Fix show_chunks on hypertable by 
hash"}},{"before":"b15ace1475d9747ab32154a7dd569ff3f11ff6e2","after":null,"ref":"refs/heads/backport/2.15.x/7004","pushedAt":"2024-06-05T18:00:02.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"timescale-automation","name":"Eon","path":"/timescale-automation","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/123763385?s=80&v=4"}},{"before":"339dbc0ceca02d29ce01e52d12d5b3e5bb7acecb","after":"453b85fa0dec061f4e02d0de591ba2e5ed4b6eea","ref":"refs/heads/2.15.x","pushedAt":"2024-06-05T18:00:01.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"timescale-automation","name":"Eon","path":"/timescale-automation","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/123763385?s=80&v=4"},"commit":{"message":"Fix segfault with closed primary dimension\n\nWhile it is no longer possible to create this checking return value\nof hyperspace_get_open_dimension should still not be neglected.\n\n(cherry picked from commit 45befd52f98401f401594bdb7d94dc5d97042720)","shortMessageHtmlLink":"Fix segfault with closed primary dimension"}},{"before":null,"after":"b15ace1475d9747ab32154a7dd569ff3f11ff6e2","ref":"refs/heads/backport/2.15.x/7004","pushedAt":"2024-06-05T17:41:18.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"github-actions[bot]","name":null,"path":"/apps/github-actions","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/15368?s=80&v=4"},"commit":{"message":"Fix segfault with closed primary dimension\n\nWhile it is no longer possible to create this checking return value\nof hyperspace_get_open_dimension should still not be neglected.\n\n(cherry picked from commit 45befd52f98401f401594bdb7d94dc5d97042720)","shortMessageHtmlLink":"Fix segfault with closed primary 
dimension"}},{"before":"577cda4425067ea7387c7e361276911650686308","after":"45befd52f98401f401594bdb7d94dc5d97042720","ref":"refs/heads/main","pushedAt":"2024-06-05T17:39:07.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"svenklemm","name":"Sven Klemm","path":"/svenklemm","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/31455525?s=80&v=4"},"commit":{"message":"Fix segfault with closed primary dimension\n\nWhile it is no longer possible to create this checking return value\nof hyperspace_get_open_dimension should still not be neglected.","shortMessageHtmlLink":"Fix segfault with closed primary dimension"}},{"before":"aedc3bb1c3e386e48268639d479a517b5c338311","after":null,"ref":"refs/heads/backport/2.15.x/6981","pushedAt":"2024-06-05T14:51:18.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"timescale-automation","name":"Eon","path":"/timescale-automation","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/123763385?s=80&v=4"}},{"before":"f617ba65c7dbf23da51c748e561eb72b06371b27","after":"339dbc0ceca02d29ce01e52d12d5b3e5bb7acecb","ref":"refs/heads/2.15.x","pushedAt":"2024-06-05T14:51:17.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"timescale-automation","name":"Eon","path":"/timescale-automation","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/123763385?s=80&v=4"},"commit":{"message":"Fix segfault creating CAgg on hypertable by hash\n\nThe new hypertable API allows create it with primary space partition and\nCAggs doesn't support hypertables with custom partition functions.\n\nFixed the segfault by properly checking if there are an open dimension\navailable during the validation.\n\n(cherry picked from commit 577cda4425067ea7387c7e361276911650686308)","shortMessageHtmlLink":"Fix segfault creating CAgg on hypertable by 
hash"}},{"before":null,"after":"aedc3bb1c3e386e48268639d479a517b5c338311","ref":"refs/heads/backport/2.15.x/6981","pushedAt":"2024-06-05T14:15:19.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"github-actions[bot]","name":null,"path":"/apps/github-actions","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/15368?s=80&v=4"},"commit":{"message":"Fix segfault creating CAgg on hypertable by hash\n\nThe new hypertable API allows create it with primary space partition and\nCAggs doesn't support hypertables with custom partition functions.\n\nFixed the segfault by properly checking if there are an open dimension\navailable during the validation.\n\n(cherry picked from commit 577cda4425067ea7387c7e361276911650686308)","shortMessageHtmlLink":"Fix segfault creating CAgg on hypertable by hash"}},{"before":"bd0a3ccdb524a2cf8f34097886535d24a59d9a6a","after":"577cda4425067ea7387c7e361276911650686308","ref":"refs/heads/main","pushedAt":"2024-06-05T14:13:21.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"fabriziomello","name":"Fabrízio de Royes Mello","path":"/fabriziomello","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/612482?s=80&v=4"},"commit":{"message":"Fix segfault creating CAgg on hypertable by hash\n\nThe new hypertable API allows create it with primary space partition and\nCAggs doesn't support hypertables with custom partition functions.\n\nFixed the segfault by properly checking if there are an open dimension\navailable during the validation.","shortMessageHtmlLink":"Fix segfault creating CAgg on hypertable by hash"}},{"before":"5c2c80f84501934a8cc338906d06a1c4f9f24b96","after":"bd0a3ccdb524a2cf8f34097886535d24a59d9a6a","ref":"refs/heads/main","pushedAt":"2024-06-05T13:08:31.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"fabriziomello","name":"Fabrízio de Royes 
Mello","path":"/fabriziomello","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/612482?s=80&v=4"},"commit":{"message":"Fix broken CI build matrix\n\nWhen building the CI matrix we create a list of ignored tests that are\nknown to be flaky but the logic was wrong leading to an empty `IGNORES`\nvariable when running the regression.\n\nhttps://github.com/timescale/timescaledb/actions/runs/9310783691","shortMessageHtmlLink":"Fix broken CI build matrix"}},{"before":"713554cb8078be46e1062daa6e7a4ee11d417b0c","after":null,"ref":"refs/heads/backport/2.15.x/6992","pushedAt":"2024-06-05T13:01:47.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"antekresic","name":"Ante Kresic","path":"/antekresic","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1985166?s=80&v=4"}},{"before":"bbc396aebcc44cd026e3436ea621b69f7958fb80","after":"f617ba65c7dbf23da51c748e561eb72b06371b27","ref":"refs/heads/2.15.x","pushedAt":"2024-06-05T13:01:46.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"antekresic","name":"Ante Kresic","path":"/antekresic","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1985166?s=80&v=4"},"commit":{"message":"Fix sort pushdown for partially compressed chunks\n\nUsing all sort pathkeys with append node for partially compressed\nchunks ends up creating invalid plans when using joined columns\nin ORDER BY clause. 
Using the pathkey prefix that belongs to the\nunderlying relation fixes the issue and sets us up for possible\nincremental sort optimization.\n\n(cherry picked from commit eb548b60d1cc9bc5874cafdf2508a00694d6a44c)","shortMessageHtmlLink":"Fix sort pushdown for partially compressed chunks"}},{"before":"9c31ecf72451671c98b6df16f1920518fe25e0f9","after":"713554cb8078be46e1062daa6e7a4ee11d417b0c","ref":"refs/heads/backport/2.15.x/6992","pushedAt":"2024-06-05T12:36:18.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"antekresic","name":"Ante Kresic","path":"/antekresic","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1985166?s=80&v=4"},"commit":{"message":"Fix sort pushdown for partially compressed chunks\n\nUsing all sort pathkeys with append node for partially compressed\nchunks ends up creating invalid plans when using joined columns\nin ORDER BY clause. Using the pathkey prefix that belongs to the\nunderlying relation fixes the issue and sets us up for possible\nincremental sort optimization.\n\n(cherry picked from commit eb548b60d1cc9bc5874cafdf2508a00694d6a44c)","shortMessageHtmlLink":"Fix sort pushdown for partially compressed chunks"}},{"before":"44ee3d332b49b13e40fd23fa940dffdaba7e1975","after":null,"ref":"refs/heads/backport/2.15.x/6996","pushedAt":"2024-06-05T12:27:07.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"timescale-automation","name":"Eon","path":"/timescale-automation","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/123763385?s=80&v=4"}},{"before":"425b22ba71c8dd7cbf878f0e6fb05cf6be8f2d47","after":"bbc396aebcc44cd026e3436ea621b69f7958fb80","ref":"refs/heads/2.15.x","pushedAt":"2024-06-05T12:27:06.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"timescale-automation","name":"Eon","path":"/timescale-automation","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/123763385?s=80&v=4"},"commit":{"message":"Fix removal of metadata function and update script\n\nChanging 
the code to remove the assumption of 1:1\nmapping between chunks and chunk constraints. Including\na check if a chunk constraint is shared with another chunk.\nIn that case, skip deleting the dimension slice.\n\n(cherry picked from commit 5c2c80f84501934a8cc338906d06a1c4f9f24b96)","shortMessageHtmlLink":"Fix removal of metadata function and update script"}},{"before":"b3ced3977d4e53e8c9c6655000cbdab9e341e28e","after":"9c31ecf72451671c98b6df16f1920518fe25e0f9","ref":"refs/heads/backport/2.15.x/6992","pushedAt":"2024-06-05T12:11:07.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"antekresic","name":"Ante Kresic","path":"/antekresic","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1985166?s=80&v=4"},"commit":{"message":"Fix sort pushdown for partially compressed chunks\n\nUsing all sort pathkeys with append node for partially compressed\nchunks ends up creating invalid plans when using joined columns\nin ORDER BY clause. Using the pathkey prefix that belongs to the\nunderlying relation fixes the issue and sets us up for possible\nincremental sort optimization.\n\n(cherry picked from commit eb548b60d1cc9bc5874cafdf2508a00694d6a44c)","shortMessageHtmlLink":"Fix sort pushdown for partially compressed chunks"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEYyrQCQA","startCursor":null,"endCursor":null}},"title":"Activity · timescale/timescaledb"}
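The "Reduce drop_extension test flakiness" entry above describes stopping background jobs before `DROP SCHEMA public CASCADE;` so the scheduler cannot touch the metadata tables mid-drop and deadlock. A hedged sketch of that idea using TimescaleDB's public job API (this is not the repository's actual test code; the id threshold and backend-type match are assumptions):

```sql
-- Pause all user-defined background jobs so the scheduler stops launching them.
-- User-defined jobs conventionally have job_id >= 1000.
SELECT alter_job(job_id, scheduled => false)
  FROM timescaledb_information.jobs
 WHERE job_id >= 1000;

-- Terminate any job worker that is already running, so no execution metadata
-- is written while the schema is being dropped.
SELECT pg_terminate_backend(pid)
  FROM pg_stat_activity
 WHERE backend_type LIKE '%TimescaleDB Background Worker%';

-- Now the cascaded drop cannot race against the scheduler.
DROP SCHEMA public CASCADE;
```

`alter_job(..., scheduled => false)` only disables future runs, which is why the terminate step is still needed for jobs already in flight.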