We can help users who run into common problems by documenting them. This issue is meant to track potential problems that we might want to document as troubleshooting info at some point.
Please add a comment to this issue if you want to add something as a candidate for troubleshooting docs.
Resources
TBD
Which documentation set does this change impact?
Stateful and Serverless
Feature differences
None?
What release is this request related to?
N/A
Collaboration model
The documentation team
Something to consider adding to the troubleshooting docs:
"Unexpected API Error: ECONNABORTED - timeout of 60000ms exceeded" error when using a proxy for egress connection
Problem: The proxy is buffering the response, resulting in a timeout.
Solution: Configure your proxy to support streaming. The proxy must allow and pass through chunked responses as they arrive, instead of buffering them until the response completes.
Explanation: Elastic uses streaming to make sure the request doesn’t time out, but if the proxy buffers the streamed chunks, the request may still time out on the Kibana side.
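If we document this, a short config sketch may help. As an illustration only (assuming nginx as the egress proxy; the upstream name is hypothetical), disabling response buffering looks roughly like:

```nginx
# Hypothetical nginx egress-proxy config: pass streamed (chunked)
# responses from the LLM endpoint through without buffering them.
location / {
    proxy_pass https://upstream-llm-endpoint;   # placeholder upstream
    proxy_http_version 1.1;   # needed for chunked transfer encoding
    proxy_buffering off;      # forward chunks as they arrive
    proxy_read_timeout 120s;  # keep the upstream read window generous
}
```

Other proxies (HAProxy, Envoy, Squid, etc.) have equivalent settings; the key point is that chunked responses must not be held until completion.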
Some other things we might want to note:
The timeout threshold Kibana accepts is hard-coded to 60 s (confirm this before documenting).
Port 443 or 80 is used for the outgoing (egress) connection.
Problem: The user has been rate-limited by the LLM service
Solution: The user needs to retry after the timeout period elapses. Optionally, the user can request a higher rate limit, but that is arranged with the LLM provider, not something Elastic does directly.
Example: In Elastic's Eden Demo environment, we can easily hit 80k tokens/min (per @LucaWintergerst). The error below was returned when using the assistant through the API.
```json
{
  "message": "Error: an error occurred while running the action - Status code: 429. Message: API Error: Too Many Requests - Requests to the ChatCompletions_Create Operation under Azure OpenAI API version 2023-07-01-preview have exceeded token rate limit of your current OpenAI S0 pricing tier. Please retry after 6 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.",
  "error": {
    "data": null,
    "isBoom": true,
    "isServer": true,
    "output": {
      "statusCode": 500,
      "payload": {
        "statusCode": 500,
        "error": "Internal Server Error",
        "message": "An internal server error occurred"
      },
      "headers": {}
    }
  }
}
```
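The retry-after-timeout advice above can be sketched as retry-with-backoff client logic. This is a minimal illustration, not Elastic code: `RateLimitError` and `send_request` are hypothetical stand-ins for whatever 429 signal and request call the user's client exposes.

```python
import time

class RateLimitError(Exception):
    """Hypothetical stand-in for a 429 Too Many Requests response."""
    def __init__(self, retry_after=None):
        self.retry_after = retry_after  # seconds, from a Retry-After hint

def call_with_backoff(send_request, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry a rate-limited request, honoring the provider's Retry-After
    hint when present and falling back to exponential backoff otherwise."""
    for attempt in range(max_retries):
        try:
            return send_request()
        except RateLimitError as err:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the 429 to the caller
            delay = err.retry_after if err.retry_after else base_delay * (2 ** attempt)
            sleep(delay)

# Usage: a fake request that is rate-limited twice, then succeeds.
calls = {"n": 0}
def fake_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError(retry_after=0)
    return "ok"

result = call_with_backoff(fake_request, sleep=lambda s: None)
```

In the Azure OpenAI example above, the 429 message itself carries the wait hint ("Please retry after 6 seconds"), which is exactly what a Retry-After-aware retry loop would use.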