I have 5 worker nodes (8 vCPU / 64 GB) and a coordinator (6 vCPU / 32 GB) in a cluster running Trino 389. When I run a query that reads from a table with 3B rows, the query hangs after reading 300-400M records. I captured a JFR recording, and in JProfiler I can see that many threads are in a blocked state. A screenshot of the blocked threads and the JFR recording are attached below. Could you please help me understand which configuration I need to optimize, or what new configuration I need to add? Thanks in advance.
myrecording.txt (change the extension to .jfr)
JVM config below:
-server
-Xmx28G
-XX:-UseBiasedLocking
-XX:+UseG1GC
-XX:G1HeapRegionSize=32M
-XX:+ExplicitGCInvokesConcurrent
-XX:+ExitOnOutOfMemoryError
-XX:+HeapDumpOnOutOfMemoryError
-XX:-OmitStackTraceInFastThrow
-XX:ReservedCodeCacheSize=512M
-XX:PerMethodRecompilationCutoff=10000
-XX:PerBytecodeRecompilationCutoff=10000
-Djdk.attach.allowAttachSelf=true
-Djdk.nio.maxCachedBufferSize=2000000
-Dlogback.configurationFile=/etc/trino/conf/trino-ranger-plugin-logback.xml
Trino Config below:
http-server.http.port=8285
http-server.https.enabled=false
node-scheduler.include-coordinator=false
query.max-memory=110GB
query.max-total-memory=120GB
query.max-memory-per-node=22GB
memory.heap-headroom-per-node=6GB
query.execution-policy=phased
query.client.timeout=4h
task.concurrency=32
task.max-worker-threads=128
log.max-size=10GB
log.max-total-size=20GB
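For reference, the memory settings above are meant to fit together; here is a quick sanity check of the arithmetic (assuming all 5 workers participate, since the coordinator is excluded from scheduling):

```python
# Sanity check of the memory settings quoted above (assumes 5 worker nodes).
workers = 5
xmx_gb = 28           # -Xmx28G per worker
headroom_gb = 6       # memory.heap-headroom-per-node
per_node_gb = 22      # query.max-memory-per-node
max_memory_gb = 110   # query.max-memory (cluster-wide user memory)

# Per node: the user memory pool plus headroom should not exceed the heap.
assert per_node_gb + headroom_gb <= xmx_gb

# Cluster-wide: query.max-memory equals the sum of the per-node limits.
assert workers * per_node_gb == max_memory_gb

print("memory settings are internally consistent")
```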