
--profile does not correctly use credential_process (Appstream) #389

Open
hemandee opened this issue Jul 18, 2023 · 2 comments · Fixed by #407 · May be fixed by #684
Labels
bug Something isn't working

Comments

@hemandee

Mountpoint for Amazon S3 version

mountpoint-s3 0.3.0-d0ef0b9

AWS Region

us-west-2

Describe the running environment

Trying to mount an S3 bucket in AppStream using an IAM profile, but the mount fails with an error.

Command:
mount-s3 --read-only --allow-other --profile appstream_machine_role --region us-west-2 -f BUCKET_NAME /mnt/
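For reference, a profile that sources credentials through `credential_process` is normally declared in `~/.aws/config` along these lines (the profile name is taken from the command above; the helper path is a made-up example):

```ini
[profile appstream_machine_role]
credential_process = /usr/local/bin/get-appstream-credentials
```

The helper program is expected to print a JSON credentials document (access key, secret key, and optionally a session token and expiration) on stdout.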

What happened?

Error:

Caused by:
0: HeadBucket failed for bucket BUCKET_NAME in region us-west-2
1: Client error
2: Unknown response error: MetaRequestResult { response_status: 0, crt_error: Error(6146, "aws-c-auth: AWS_AUTH_SIGNING_NO_CREDENTIALS, Attempt to sign an http request without credentials"), error_response_headers: None, error_response_body: None }

Relevant log output

2023-07-18T18:37:05.280819Z DEBUG awscrt::AWSProfile: Creating profile collection from file at "/home/ImageBuilderAdmin/.aws/config"    
2023-07-18T18:37:05.280877Z TRACE awscrt::AWSProfile: Parsing aws profile line in profile "<None>", current property: "<None>"    
2023-07-18T18:37:05.280927Z TRACE awscrt::AWSProfile: Parsing aws profile line in profile "appstream_machine_role", current property: "<None>"    
2023-07-18T18:37:05.280963Z DEBUG awscrt::AuthCredentialsProvider: (id=0x561d1bb35cd0) Profile credentials provider successfully built config profile collection from file at (/home/ImageBuilderAdmin/.aws/config)    
2023-07-18T18:37:05.280994Z DEBUG awscrt::AWSProfile: Creating profile collection from file at "/home/ImageBuilderAdmin/.aws/credentials"    
2023-07-18T18:37:05.281042Z TRACE awscrt::AWSProfile: Parsing aws profile line in profile "<None>", current property: "<None>"    
2023-07-18T18:37:05.281077Z  WARN awscrt::AWSProfile: Profile declarations in credentials files are not allowed to begin with the "profile" keyword    
2023-07-18T18:37:05.281109Z  WARN awscrt::AWSProfile: Profile Parse context:
 Source File:/home/ImageBuilderAdmin/.aws/credentials
 Line: 1
 Current Profile: <None>
 Current Property: <None>    
2023-07-18T18:37:05.281140Z TRACE awscrt::AWSProfile: Parsing aws profile line in profile "<None>", current property: "<None>"    
2023-07-18T18:37:05.281171Z  WARN awscrt::AWSProfile: Property definition seen outside a profile    
2023-07-18T18:37:05.281200Z  WARN awscrt::AWSProfile: Profile Parse context:
 Source File:/home/ImageBuilderAdmin/.aws/credentials
 Line: 2
 Current Profile: <None>
 Current Property: <None>    
2023-07-18T18:37:05.281233Z DEBUG awscrt::AuthCredentialsProvider: (id=0x561d1bb35cd0) Profile credentials provider successfully built credentials profile collection from file at (/home/ImageBuilderAdmin/.aws/credentials)    
2023-07-18T18:37:05.281268Z  WARN awscrt::AWSProfile: property "credential_process" has value "" replaced during merge    
2023-07-18T18:37:05.281301Z  INFO awscrt::AuthCredentialsProvider: (id=0x561d1bb35cd0) Profile credentials provider attempting to pull credentials from profile "appstream_machine_role"    
2023-07-18T18:37:05.281335Z ERROR awscrt::AuthSigning: (id=0x7f9fc4000ba0) Credentials Provider failed to source credentials with error 2049(aws-c-http: AWS_ERROR_HTTP_HEADER_NOT_FOUND, The specified header was not found)    
2023-07-18T18:37:05.281368Z ERROR awscrt::S3MetaRequest: id=0x561d1ba71d20 Meta request could not sign HTTP request due to error code 6146 (Attempt to sign an http request without credentials)    
2023-07-18T18:37:05.281399Z ERROR awscrt::S3MetaRequest: id=0x561d1ba71d20 Could not prepare request 0x7f9fcc001790 due to error 6146 (Attempt to sign an http request without credentials).
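Aside from the `credential_process` problem, the parser warnings above ("Profile declarations in credentials files are not allowed to begin with the \"profile\" keyword") point at a second issue: section headers in `~/.aws/credentials` must be the bare profile name, while the `profile` prefix is only valid in `~/.aws/config`. Roughly (values are placeholders):

```ini
# ~/.aws/config: named profiles use the "profile" prefix
[profile appstream_machine_role]
credential_process = /path/to/credential-helper

# ~/.aws/credentials: headers are the bare profile name, no "profile" prefix
[appstream_machine_role]
aws_access_key_id = ...
aws_secret_access_key = ...
```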
@hemandee hemandee added the bug Something isn't working label Jul 18, 2023
@sauraank sauraank assigned sauraank and monthonk and unassigned sauraank Jul 19, 2023
@monthonk
Contributor

Thank you for reporting the issue. It seems like the --profile configuration doesn't work with credential_process right now. As a workaround, you should be able to set the profile with the AWS_PROFILE environment variable instead.

I tested running Mountpoint in AppStream with this command, and it worked fine for me.

AWS_PROFILE=appstream_machine_role mount-s3 -f BUCKET_NAME /mnt/

We will look into this problem and post updates here.

@dannycjones
Contributor

The PR was related to this issue but closed it prematurely; we are continuing to look into fixing --profile <PROFILE_NAME>.

@dannycjones dannycjones reopened this Jul 25, 2023
@dannycjones dannycjones assigned dannycjones and unassigned monthonk Jul 25, 2023
@jamesbornholt jamesbornholt changed the title Trying to mount in Appstream --profile does not correctly use credential_process (Appstream) Aug 5, 2023
@dannycjones dannycjones removed their assignment Aug 14, 2023
jamesbornholt added a commit to jamesbornholt/mountpoint-s3 that referenced this issue Apr 1, 2024
We were using the SDK's default retry configuration (actually, slightly
wrong -- it's supposed to be 3 total attempts, but we configured 3
*retries*, so 4 attempts). This isn't a good default for file systems,
as it works out to only retrying for about 2 seconds before giving up,
and applications are rarely equipped to gracefully handle transient
errors.

This change increases the default to 10 total attempts, which takes
about a minute on average. This is in the same ballpark as NFS's
defaults (3 attempts, 60 seconds linear backoff), though still a little
more aggressive. There's probably scope to go even further (20?), but
this is a reasonable step for now.

To allow customers to further tweak this, the S3CrtClient now respects
the `AWS_MAX_ATTEMPTS` environment variable, and its value overrides the
defaults. This is only a partial solution, as SDKs are supposed to also
respect the `max_attempts` config file setting, but we don't have any of
the infrastructure for that today (similar issue as awslabs#389).

Signed-off-by: James Bornholt <bornholt@amazon.com>
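The commit message above describes an `AWS_MAX_ATTEMPTS` environment-variable override with a built-in default of 10 total attempts. A minimal sketch of that resolution logic, assuming a hypothetical function name and structure (not Mountpoint's actual implementation):

```rust
/// Resolve the total number of request attempts from an optional
/// environment-style override, falling back to a default when the
/// value is unset, unparsable, or zero.
fn max_attempts_from_env(raw: Option<&str>, default: u32) -> u32 {
    raw.and_then(|v| v.trim().parse::<u32>().ok())
        .filter(|&n| n > 0)
        .unwrap_or(default)
}

fn main() {
    // In a real client this would read std::env::var("AWS_MAX_ATTEMPTS").
    let override_value = std::env::var("AWS_MAX_ATTEMPTS").ok();
    let attempts = max_attempts_from_env(override_value.as_deref(), 10);
    println!("total attempts: {}", attempts);
}
```

The env var wins only when it parses to a positive integer; anything else silently falls back to the default, which mirrors the "its value overrides the defaults" behavior the commit describes.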
github-merge-queue bot pushed a commit that referenced this issue Apr 2, 2024
Increase default max retries and expose environment variable to override (#830)

* Increase default max retries and expose environment variable to override


* Surprised Clippy doesn't yell about this

Signed-off-by: James Bornholt <bornholt@amazon.com>

---------

Signed-off-by: James Bornholt <bornholt@amazon.com>