{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":484100805,"defaultBranch":"main","name":"aws-eda-slurm-cluster","ownerLogin":"aws-samples","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2022-04-21T15:14:19.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/8931462?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1717798934.0","currentOid":""},"activityList":{"items":[{"before":"7255024c504b154e64591b54a2e134321fe11cf0","after":"109844e2b4f24581dad218fe759d7301a9fc5a61","ref":"refs/heads/236-feature-add-support-for-parallelcluster-392","pushedAt":"2024-06-07T22:37:10.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Support ParallelCluster 3.9.2\n\nResolves #236","shortMessageHtmlLink":"Support ParallelCluster 3.9.2"}},{"before":null,"after":"7255024c504b154e64591b54a2e134321fe11cf0","ref":"refs/heads/236-feature-add-support-for-parallelcluster-392","pushedAt":"2024-06-07T22:22:14.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Do not auto-prune instance types if there are too many (#235)\n\nI was previously only allowing 1 memory size/core count combination to keep\r\nthe number of compute resources down and also was combining multiple instance\r\ntypes in one compute resource if possible.\r\nThis was to try to maximize the number of instance types that were configured.\r\n\r\nThis led to people not being able to configure the exact instance types they\r\nwanted.\r\nThe preference is to notify the user and let them choose which instances types\r\nto exclude or to reduce the number of included types.\r\n\r\nSo, 
I've reverted to my original strategy of 1 instance type per compute resource and 1 CR per queue.\r\nThe compute resources can be combined into any queues that the user wants using\r\ncustom slurm settings.\r\n\r\nI had to exclude instance types in the default configuration in order to keep from exceeding the PC limits.\r\n\r\nResolves #220\r\n\r\nUpdate ParallelCluster version in config files and docs.\r\n\r\nClean up security scan.","shortMessageHtmlLink":"Do not auto-prune instance types if there are too many (#235)"}},{"before":"bf75217889fe86cadfff31175b5a06ce02c62725","after":"c89b5ccf928b2cc41c1e01f44cefa4ed3eff6df2","ref":"refs/heads/gh-pages","pushedAt":"2024-05-23T22:37:33.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Deployed 7255024 with MkDocs version: 1.5.3","shortMessageHtmlLink":"Deployed 7255024 with MkDocs version: 1.5.3"}},{"before":"bb9c7523e3d5c569d3d12defc8e59e32e5bbe005","after":null,"ref":"refs/heads/220-reducing-number-of-compute-resources-to-aggressively","pushedAt":"2024-05-23T22:34:44.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"}},{"before":"70fd1ef63c1b92fd2c2c8a4774599e5dfd678a9c","after":"7255024c504b154e64591b54a2e134321fe11cf0","ref":"refs/heads/main","pushedAt":"2024-05-23T22:34:43.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Do not auto-prune instance types if there are too many (#235)\n\nI was previously only allowing 1 memory size/core count combination to keep\r\nthe number of compute resources down and also was combining multiple 
instance\r\ntypes in one compute resource if possible.\r\nThis was to try to maximize the number of instance types that were configured.\r\n\r\nThis led to people not being able to configure the exact instance types they\r\nwanted.\r\nThe preference is to notify the user and let them choose which instances types\r\nto exclude or to reduce the number of included types.\r\n\r\nSo, I've reverted to my original strategy of 1 instance type per compute resource and 1 CR per queue.\r\nThe compute resources can be combined into any queues that the user wants using\r\ncustom slurm settings.\r\n\r\nI had to exclude instance types in the default configuration in order to keep from exceeding the PC limits.\r\n\r\nResolves #220\r\n\r\nUpdate ParallelCluster version in config files and docs.\r\n\r\nClean up security scan.","shortMessageHtmlLink":"Do not auto-prune instance types if there are too many (#235)"}},{"before":"d33b6dfe85f240534f1a4f7f551e94e429d08045","after":"bb9c7523e3d5c569d3d12defc8e59e32e5bbe005","ref":"refs/heads/220-reducing-number-of-compute-resources-to-aggressively","pushedAt":"2024-05-23T21:06:03.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Do not auto-prune instance types if there are too many\n\nI was previously only allowing 1 memory size/core count combination to keep\nthe number of compute resources down and also was combining multiple instance\ntypes in one compute resource if possible.\nThis was to try to maximize the number of instance types that were configured.\n\nThis led to people not being able to configure the exact instance types they\nwanted.\nThe preference is to notify the user and let them choose which instances types\nto exclude or to reduce the number of included types.\n\nSo, I've reverted to my original strategy of 1 instance type per compute resource and 1 CR per 
queue.\nThe compute resources can be combined into any queues that the user wants using\ncustom slurm settings.\n\nI had to exclude instance types in the default configuration in order to keep from exceeding the PC limits.\n\nResolves #220\n\nUpdate ParallelCluster version in config files and docs.\n\nClean up security scan.","shortMessageHtmlLink":"Do not auto-prune instance types if there are too many"}},{"before":"1c45cc711ccb2ee199185061c52a5b2f6149cbe9","after":"d33b6dfe85f240534f1a4f7f551e94e429d08045","ref":"refs/heads/220-reducing-number-of-compute-resources-to-aggressively","pushedAt":"2024-05-23T18:17:26.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Do not auto-prune instance types if there are too many\n\nI was previously only allowing 1 memory size/core count combination to keep\nthe number of compute resources down and also was combining multiple instance\ntypes in one compute resource if possible.\nThis was to try to maximize the number of instance types that were configured.\n\nThis led to people not being able to configure the exact instance types they\nwanted.\nThe preference is to notify the user and let them choose which instances types\nto exclude or to reduce the number of included types.\n\nSo, I've reverted to my original strategy of 1 instance type per compute resource and 1 CR per queue.\nThe compute resources can be combined into any queues that the user wants using\ncustom slurm settings.\n\nI had to exclude instance types in the default configuration in order to keep from exceeding the PC limits.\n\nResolves #220\n\nUpdate ParallelCluster version in config files and docs.","shortMessageHtmlLink":"Do not auto-prune instance types if there are too 
many"}},{"before":"8869d9e52fa6a5fa2c5672beaf67a9c75644cd2c","after":"bf75217889fe86cadfff31175b5a06ce02c62725","ref":"refs/heads/gh-pages","pushedAt":"2024-05-21T18:49:13.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Deployed 1c45cc7 with MkDocs version: 1.5.3","shortMessageHtmlLink":"Deployed 1c45cc7 with MkDocs version: 1.5.3"}},{"before":"437dbc93d139a5de3074c74ffd373e58549bff3c","after":"1c45cc711ccb2ee199185061c52a5b2f6149cbe9","ref":"refs/heads/220-reducing-number-of-compute-resources-to-aggressively","pushedAt":"2024-05-21T18:40:37.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Do not auto-prune instance types if there are too many\n\nI was previously only allowing 1 memory size/core count combination to keep\nthe number of compute resources down and also was combining multiple instance\ntypes in one compute resource if possible.\nThis was to try to maximize the number of instance types that were configured.\n\nThis led to people not being able to configure the exact instance types they\nwanted.\nThe preference is to notify the user and let them choose which instances types\nto exclude or to reduce the number of included types.\n\nSo, I've reverted to my original strategy of 1 instance type per compute resource and 1 CR per queue.\nThe compute resources can be combined into any queues that the user wants using\ncustom slurm settings.\n\nI had to exclude instance types in the default configuration in order to keep from exceeding the PC limits.\n\nResolves #220","shortMessageHtmlLink":"Do not auto-prune instance types if there are too 
many"}},{"before":null,"after":"70fd1ef63c1b92fd2c2c8a4774599e5dfd678a9c","ref":"refs/heads/233-feature-install-slurm-utilities-in-nfs-area-so-all-machines-can-see-it-no-need-to-recompile-for-every-machine","pushedAt":"2024-05-17T18:30:16.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Update deployment docs (#234)\n\nClarify and correct the docs.\r\n\r\nResolves #222","shortMessageHtmlLink":"Update deployment docs (#234)"}},{"before":"8921d891dd6edeb9bd4873c6ca5d4e8a43af6a81","after":"437dbc93d139a5de3074c74ffd373e58549bff3c","ref":"refs/heads/220-reducing-number-of-compute-resources-to-aggressively","pushedAt":"2024-05-17T18:25:16.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Do not auto-prune instance types if there are too many\n\nI was previously only allowing 1 memory size/core count combination to keep\nthe number of compute resources down and also was combining multiple instance\ntypes in one compute resource if possible.\n\nThis led to people no being able to configure the exact instance types they\nwanted.\n\nSo, I've reverted to my original strategy of 1 instance type per compute resource and 1 CR per queue.\nThe compute resources can be combined into any queues that the user wants using\ncustom slurm settings.\n\nI had to exclude instance types in the default configuration in order to keep from exceeding the PC limits.\n\nResolves #220","shortMessageHtmlLink":"Do not auto-prune instance types if there are too 
many"}},{"before":"90ae28470f1b4f5b89c8a134d3b383d1d0cf526e","after":"8921d891dd6edeb9bd4873c6ca5d4e8a43af6a81","ref":"refs/heads/220-reducing-number-of-compute-resources-to-aggressively","pushedAt":"2024-05-17T17:41:29.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Do not auto-prune instance types if there are too many\n\nI was previously only allowing 1 memory size/core count combination to keep\nthe number of compute resources down and also was combining multiple instance\ntypes in one compute resource if possible.\n\nThis led to people no being able to configure the exact instance types they\nwanted.\n\nSo, I've reverted to my original strategy of 1 instance type per compute resource and 1 CR per queue.\nThe compute resources can be combined into any queues that the user wants using\ncustom slurm settings.\n\nI had to exclude instance types in the default configuration in order to keep from exceeding the PC limits.\n\nResolves #220","shortMessageHtmlLink":"Do not auto-prune instance types if there are too many"}},{"before":"8dff7cd021eade93eb40d74c280536b43535cf52","after":"90ae28470f1b4f5b89c8a134d3b383d1d0cf526e","ref":"refs/heads/220-reducing-number-of-compute-resources-to-aggressively","pushedAt":"2024-05-17T17:26:40.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Do not auto-prune instance types if there are too many\n\nI was previously only allowing 1 memory size/core count combination to keep\nthe number of compute resources down and also was combining multiple instance\ntypes in one compute resource if possible.\n\nThis led to people no being able to configure the exact instance types they\nwanted.\n\nSo, I've reverted to my 
original strategy of 1 instance type per compute resource and 1 CR per queue.\nThe compute resources can be combined into any queues that the user wants using\ncustom slurm settings.\n\nI had to exclude instance types in the default configuration in order to keep from exceeding the PC limits.\n\nResolves #220","shortMessageHtmlLink":"Do not auto-prune instance types if there are too many"}},{"before":"ef67eb9570e14fcf5995db38a3245a3843558e5d","after":null,"ref":"refs/heads/222-documentation-corrections-required-on-deploy-parallel-cluster-documentation-page","pushedAt":"2024-05-15T18:43:24.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"}},{"before":"8dff7cd021eade93eb40d74c280536b43535cf52","after":"70fd1ef63c1b92fd2c2c8a4774599e5dfd678a9c","ref":"refs/heads/main","pushedAt":"2024-05-15T18:43:23.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Update deployment docs (#234)\n\nClarify and correct the docs.\r\n\r\nResolves #222","shortMessageHtmlLink":"Update deployment docs (#234)"}},{"before":"9ba350f3934a7eaebe80184e71d3812b17779495","after":"8869d9e52fa6a5fa2c5672beaf67a9c75644cd2c","ref":"refs/heads/gh-pages","pushedAt":"2024-05-15T18:42:22.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Deployed ef67eb9 with MkDocs version: 1.5.3","shortMessageHtmlLink":"Deployed ef67eb9 with MkDocs version: 
1.5.3"}},{"before":"54b6a9eae90b8d79b24ddc8268d8ea028288f040","after":"ef67eb9570e14fcf5995db38a3245a3843558e5d","ref":"refs/heads/222-documentation-corrections-required-on-deploy-parallel-cluster-documentation-page","pushedAt":"2024-05-15T18:42:14.000Z","pushType":"push","commitsCount":4,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Update deployment docs\n\nClarify and correct the docs.\n\nResolves #222","shortMessageHtmlLink":"Update deployment docs"}},{"before":"ded618c956734f64ec26b118433bc0dc6361d2df","after":"8dff7cd021eade93eb40d74c280536b43535cf52","ref":"refs/heads/220-reducing-number-of-compute-resources-to-aggressively","pushedAt":"2024-05-13T23:53:04.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Add support for ParallelCluster versions 3.9.0 and 3.9.1 (#232)\n\nAdd support for rhel9 and rocky9.\r\nHad to update some of the ansible playbooks to mimic rhel8 changes.\r\n\r\nResolves #229\r\n\r\nSet SubmitterInstanceTags based on RESEnvironmentName.\r\n\r\nRemove SubmitterSecurityGroupIds parameter.\r\nThis option added rules to existing security groups and if they were used by multiple clusters then the number of security group rules would exceed the maximum allowed.\r\nWith the addition of adding security groups to the head and compute nodes the\r\ncustomer should supply their own security groups that meet the slurm cluster requirements, attach them to their login nodes and configure them as additional security groups for the head and compute nodes.\r\n\r\nResolves #204\r\n\r\nUpdate CallSlurmRestApiLambda from Python 3.8 to 3.9.\r\n\r\nResolves #230\r\n\r\nUpdate CDK version to 2.111.0.\r\nThis is the latest version supported by nodejs 16.\r\nReally need to move to 
nodejs 20, but it isn't supported on Amazon Linux 2 or\r\nRHEL 7 family.\r\nWould require either running in a Docker container or on a newer OS version.\r\nI think that I'm going to change the prerequisites for the OS distribution\r\nso that I can stay on the latest tools.\r\nFor example, I can't update to Python 3.12 until I do this.\r\n\r\nUpdate DeconfigureRESUsersGroupsJson to pass if last statement fails.\r\n\r\nFix bug in create_slurm_accounts.py\r\n\r\nResolves #231","shortMessageHtmlLink":"Add support for ParallelCluster versions 3.9.0 and 3.9.1 (#232)"}},{"before":"52a307b247e926b584850e05201a2fec22021865","after":"9ba350f3934a7eaebe80184e71d3812b17779495","ref":"refs/heads/gh-pages","pushedAt":"2024-05-13T23:49:03.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Deployed 72edcdf with MkDocs version: 1.5.3","shortMessageHtmlLink":"Deployed 72edcdf with MkDocs version: 1.5.3"}},{"before":"72edcdf0af8a7aaa69dbef29065ee41bd08b9932","after":null,"ref":"refs/heads/229-feature-add-support-for-parallelcluster-version-390-and-391","pushedAt":"2024-05-13T23:48:21.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"}},{"before":"ded618c956734f64ec26b118433bc0dc6361d2df","after":"8dff7cd021eade93eb40d74c280536b43535cf52","ref":"refs/heads/main","pushedAt":"2024-05-13T23:48:21.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Add support for ParallelCluster versions 3.9.0 and 3.9.1 (#232)\n\nAdd support for rhel9 and rocky9.\r\nHad to update some of the ansible playbooks to mimic rhel8 
changes.\r\n\r\nResolves #229\r\n\r\nSet SubmitterInstanceTags based on RESEnvironmentName.\r\n\r\nRemove SubmitterSecurityGroupIds parameter.\r\nThis option added rules to existing security groups and if they were used by multiple clusters then the number of security group rules would exceed the maximum allowed.\r\nWith the addition of adding security groups to the head and compute nodes the\r\ncustomer should supply their own security groups that meet the slurm cluster requirements, attach them to their login nodes and configure them as additional security groups for the head and compute nodes.\r\n\r\nResolves #204\r\n\r\nUpdate CallSlurmRestApiLambda from Python 3.8 to 3.9.\r\n\r\nResolves #230\r\n\r\nUpdate CDK version to 2.111.0.\r\nThis is the latest version supported by nodejs 16.\r\nReally need to move to nodejs 20, but it isn't supported on Amazon Linux 2 or\r\nRHEL 7 family.\r\nWould require either running in a Docker container or on a newer OS version.\r\nI think that I'm going to change the prerequisites for the OS distribution\r\nso that I can stay on the latest tools.\r\nFor example, I can't update to Python 3.12 until I do this.\r\n\r\nUpdate DeconfigureRESUsersGroupsJson to pass if last statement fails.\r\n\r\nFix bug in create_slurm_accounts.py\r\n\r\nResolves #231","shortMessageHtmlLink":"Add support for ParallelCluster versions 3.9.0 and 3.9.1 (#232)"}},{"before":"40cfafd5c1854281b55a9334b15f5afa3d5cd935","after":"72edcdf0af8a7aaa69dbef29065ee41bd08b9932","ref":"refs/heads/229-feature-add-support-for-parallelcluster-version-390-and-391","pushedAt":"2024-05-13T23:45:19.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Add support for ParallelCluster versions 3.9.0 and 3.9.1\n\nAdd support for rhel9 and rocky9.\nHad to update some of the ansible playbooks to mimic rhel8 
changes.\n\nResolves #229\n\nSet SubmitterInstanceTags based on RESEnvironmentName.\n\nRemove SubmitterSecurityGroupIds parameter.\nThis option added rules to existing security groups and if they were used by multiple clusters then the number of security group rules would exceed the maximum allowed.\nWith the addition of adding security groups to the head and compute nodes the\ncustomer should supply their own security groups that meet the slurm cluster requirements, attach them to their login nodes and configure them as additional security groups for the head and compute nodes.\n\nResolves #204\n\nUpdate CallSlurmRestApiLambda from Python 3.8 to 3.9.\n\nResolves #230\n\nUpdate CDK version to 2.111.0.\nThis is the latest version supported by nodejs 16.\nReally need to move to nodejs 20, but it isn't supported on Amazon Linux 2 or\nRHEL 7 family.\nWould require either running in a Docker container or on a newer OS version.\nI think that I'm going to change the prerequisites for the OS distribution\nso that I can stay on the latest tools.\nFor example, I can't update to Python 3.12 until I do this.\n\nUpdate DeconfigureRESUsersGroupsJson to pass if last statement fails.\n\nFix bug in create_slurm_accounts.py\n\nResolves #231","shortMessageHtmlLink":"Add support for ParallelCluster versions 3.9.0 and 3.9.1"}},{"before":"c999cd6cf1d5f5081f0b6ffa842cef27164b16dc","after":"52a307b247e926b584850e05201a2fec22021865","ref":"refs/heads/gh-pages","pushedAt":"2024-05-13T23:38:17.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Deployed 40cfafd with MkDocs version: 1.5.3","shortMessageHtmlLink":"Deployed 40cfafd with MkDocs version: 
1.5.3"}},{"before":"b8ddc476bd1a0106ca60e43fd3a255029663468e","after":"40cfafd5c1854281b55a9334b15f5afa3d5cd935","ref":"refs/heads/229-feature-add-support-for-parallelcluster-version-390-and-391","pushedAt":"2024-05-13T23:38:12.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Add support for ParallelCluster versions 3.9.0 and 3.9.1\n\nAdd support for rhel9 and rocky9.\nHad to update some of the ansible playbooks to mimic rhel8 changes.\n\nResolves #229\n\nSet SubmitterInstanceTags based on RESEnvironmentName.\n\nRemove SubmitterSecurityGroupIds parameter.\nThis option added rules to existing security groups and if they were used by multiple clusters then the number of security group rules would exceed the maximum allowed.\nWith the addition of adding security groups to the head and compute nodes the\ncustomer should supply their own security groups that meet the slurm cluster requirements, attach them to their login nodes and configure them as additional security groups for the head and compute nodes.\n\nResolves #204\n\nUpdate CallSlurmRestApiLambda from Python 3.8 to 3.9.\n\nResolves #230\n\nUpdate CDK version to 2.111.0.\nThis is the latest version supported by nodejs 16.\nReally need to move to nodejs 20, but it isn't supported on Amazon Linux 2 or\nRHEL 7 family.\nWould require either running in a Docker container or on a newer OS version.\nI think that I'm going to change the prerequisites for the OS distribution\nso that I can stay on the latest tools.\nFor example, I can't update to Python 3.12 until I do this.\n\nUpdate DeconfigureRESUsersGroupsJson to pass if last statement fails.\n\nFix bug in create_slurm_accounts.py\n\nResolves #231","shortMessageHtmlLink":"Add support for ParallelCluster versions 3.9.0 and 
3.9.1"}},{"before":"000b451bc353bb188cf003a1b4b4120718e191b1","after":"c999cd6cf1d5f5081f0b6ffa842cef27164b16dc","ref":"refs/heads/gh-pages","pushedAt":"2024-05-13T22:39:01.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Deployed b8ddc47 with MkDocs version: 1.5.3","shortMessageHtmlLink":"Deployed b8ddc47 with MkDocs version: 1.5.3"}},{"before":"9db951366c30fd04eba3c0388a393c36ffb6d3c6","after":"b8ddc476bd1a0106ca60e43fd3a255029663468e","ref":"refs/heads/229-feature-add-support-for-parallelcluster-version-390-and-391","pushedAt":"2024-05-13T20:47:20.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Add support for ParallelCluster versions 3.9.0 and 3.9.1\n\nAdd support for rhel9 and rocky9.\nHad to update some of the ansible playbooks to mimic rhel8 changes.\n\nResolves #229\n\nSet SubmitterInstanceTags based on RESEnvironmentName.\n\nRemove SubmitterSecurityGroupIds parameter.\nThis option added rules to existing security groups and if they were used by multiple clusters then the number of security group rules would exceed the maximum allowed.\nWith the addition of adding security groups to the head and compute nodes the\ncustomer should supply their own security groups that meet the slurm cluster requirements, attach them to their login nodes and configure them as additional security groups for the head and compute nodes.\n\nResolves #204\n\nUpdate CallSlurmRestApiLambda from Python 3.8 to 3.9.\n\nResolves #230\n\nUpdate CDK version to 2.111.0.\nThis is the latest version supported by nodejs 16.\nReally need to move to nodejs 20, but it isn't supported on Amazon Linux 2 or\nRHEL 7 family.\nWould require either running in a Docker 
container or on a newer OS version.\nI think that I'm going to change the prerequisites for the OS distribution\nso that I can stay on the latest tools.\nFor example, I can't update to Python 3.12 until I do this.\n\nUpdate DeconfigureRESUsersGroupsJson to pass if last statement fails.\n\nFix bug in create_slurm_accounts.py\n\nResolves #231","shortMessageHtmlLink":"Add support for ParallelCluster versions 3.9.0 and 3.9.1"}},{"before":"4469261d333aa1fcd9f9caca5abc10ce9c7d5241","after":"9db951366c30fd04eba3c0388a393c36ffb6d3c6","ref":"refs/heads/229-feature-add-support-for-parallelcluster-version-390-and-391","pushedAt":"2024-05-13T20:18:55.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Add support for ParallelCluster versions 3.9.0 and 3.9.1\n\nAdd support for rhel9 and rocky9.\nHad to update some of the ansible playbooks to mimic rhel8 changes.\n\nResolves #229\n\nSet SubmitterInstanceTags based on RESEnvironmentName.\n\nRemove SubmitterSecurityGroupIds parameter.\nThis option added rules to existing security groups and if they were used by multiple clusters then the number of security group rules would exceed the maximum allowed.\nWith the addition of adding security groups to the head and compute nodes the\ncustomer should supply their own security groups that meet the slurm cluster requirements, attach them to their login nodes and configure them as additional security groups for the head and compute nodes.\n\nResolves #204\n\nUpdate CallSlurmRestApiLambda from Python 3.8 to 3.9.\n\nResolves #230\n\nUpdate CDK version to 2.111.0.\nThis is the latest version supported by nodejs 16.\nReally need to move to nodejs 20, but it isn't supported on Amazon Linux 2 or\nRHEL 7 family.\nWould require either running in a Docker container or on a newer OS version.\nI think that I'm going to change the 
prerequisites for the OS distribution\nso that I can stay on the latest tools.\nFor example, I can't update to Python 3.12 until I do this.\n\nUpdate DeconfigureRESUsersGroupsJson to pass if last statement fails.","shortMessageHtmlLink":"Add support for ParallelCluster versions 3.9.0 and 3.9.1"}},{"before":"13741392e71320716957f857740ea0e35f992a0b","after":"4469261d333aa1fcd9f9caca5abc10ce9c7d5241","ref":"refs/heads/229-feature-add-support-for-parallelcluster-version-390-and-391","pushedAt":"2024-05-13T20:17:32.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Add support for ParallelCluster versions 3.9.0 and 3.9.1\n\nAdd support for rhel9 and rocky9.\nHad to update some of the ansible playbooks to mimic rhel8 changes.\n\nResolves #229\n\nSet SubmitterInstanceTags based on RESEnvironmentName.\n\nRemove SubmitterSecurityGroupIds parameter.\nThis option added rules to existing security groups and if they were used by multiple clusters then the number of security group rules would exceed the maximum allowed.\nWith the addition of adding security groups to the head and compute nodes the\ncustomer should supply their own security groups that meet the slurm cluster requirements, attach them to their login nodes and configure them as additional security groups for the head and compute nodes.\n\nResolves #204\n\nUpdate CallSlurmRestApiLambda from Python 3.8 to 3.9.\n\nResolves #230\n\nUpdate CDK version to 2.111.0.\nThis is the latest version supported by nodejs 16.\nReally need to move to nodejs 20, but it isn't supported on Amazon Linux 2 or\nRHEL 7 family.\nWould require either running in a Docker container or on a newer OS version.\nI think that I'm going to change the prerequisites for the OS distribution\nso that I can stay on the latest tools.\nFor example, I can't update to Python 3.12 until I do 
this.","shortMessageHtmlLink":"Add support for ParallelCluster versions 3.9.0 and 3.9.1"}},{"before":"08057477240a1da29d309bdc0bd52dee4130bf26","after":"13741392e71320716957f857740ea0e35f992a0b","ref":"refs/heads/229-feature-add-support-for-parallelcluster-version-390-and-391","pushedAt":"2024-05-10T22:42:18.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Add support for ParallelCluster versions 3.9.0 and 3.9.1\n\nAdd support for rhel9 and rocky9.\nHad to update some of the ansible playbooks to mimic rhel8 changes.\n\nResolves #229\n\nSet SubmitterInstanceTags based on RESEnvironmentName.\n\nRemove SubmitterSecurityGroupIds parameter.\nThis option added rules to existing security groups and if they were used by multiple clusters then the number of security group rules would exceed the maximum allowed.\nWith the addition of adding security groups to the head and compute nodes the\ncustomer should supply their own security groups that meet the slurm cluster requirements, attach them to their login nodes and configure them as additional security groups for the head and compute nodes.\n\nResolves #204","shortMessageHtmlLink":"Add support for ParallelCluster versions 3.9.0 and 3.9.1"}},{"before":"173b715eee330387ecb56f4645f89342854abb2b","after":"08057477240a1da29d309bdc0bd52dee4130bf26","ref":"refs/heads/229-feature-add-support-for-parallelcluster-version-390-and-391","pushedAt":"2024-05-10T22:22:28.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"cartalla","name":"Allan Carter","path":"/cartalla","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24233076?s=80&v=4"},"commit":{"message":"Add support for ParallelCluster versions 3.9.0 and 3.9.1\n\nAdd support for rhel9 and rocky9.\n\nResolves #229\n\nSet SubmitterInstanceTags based on RESEnvironmentName.\n\nRemove 
SubmitterSecurityGroupIds parameter.\nThis option added rules to existing security groups and if they were used by multiple clusters then the number of security group rules would exceed the maximum allowed.\nWith the addition of adding security groups to the head and compute nodes the\ncustomer should supply their own security groups that meet the slurm cluster requirements, attach them to their login nodes and configure them as additional security groups for the head and compute nodes.\n\nResolves #204","shortMessageHtmlLink":"Add support for ParallelCluster versions 3.9.0 and 3.9.1"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEX6Ri1wA","startCursor":null,"endCursor":null}},"title":"Activity ยท aws-samples/aws-eda-slurm-cluster"}
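The "one instance type per compute resource, one compute resource per queue" strategy from PR #235 can be illustrated with a minimal ParallelCluster-style config sketch. This is not taken from the repository; the queue and resource names are hypothetical, and the exact shape of this project's "custom slurm settings" mechanism may differ:

```yaml
# Illustrative sketch only: one instance type per compute resource (CR),
# one CR per queue, following the strategy described in PR #235.
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: c7a-large          # hypothetical queue name: one queue per CR
      ComputeResources:
        - Name: c7a-large      # hypothetical CR name: exactly one instance type
          InstanceType: c7a.large
          MinCount: 0
          MaxCount: 10
    - Name: r7a-large
      ComputeResources:
        - Name: r7a-large
          InstanceType: r7a.large
          MinCount: 0
          MaxCount: 10
```

Per the commit message, users who want several compute resources behind a single queue can recombine them with custom Slurm settings (e.g. a partition spanning multiple node sets) rather than relying on automatic pruning or merging.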