Error: Maximum update depth exceeded when using the useCompletion hook in Next.js on long gpt-4o responses #1610
Comments
Same here.
Hey, I don't know if you've already solved the problem, but I managed to fix it, and maybe it can help you. My issue was that I was using
This was causing multiple updates. I solved the problem by passing
I hope this helps you.
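The exact code from the comment above is elided in this thread, but a common cause of "Maximum update depth exceeded" is an effect whose dependency is recreated on every render. React compares hook dependencies slot-by-slot with `Object.is`, so an inline object or array literal never equals the previous render's copy, and a `setState` inside that effect then loops. A minimal sketch of the comparison (a simplified stand-in for React's internal check, not the SDK's actual code):

```typescript
// Hypothetical sketch: why an inline object dependency re-triggers an effect.
// React compares useEffect dependencies with Object.is (reference equality),
// so a fresh literal on every render never matches the previous one.

function depsChanged(prev: readonly unknown[], next: readonly unknown[]): boolean {
  // Simplified stand-in for React's internal areHookInputsEqual check.
  if (prev.length !== next.length) return true;
  return prev.some((dep, i) => !Object.is(dep, next[i]));
}

// A fresh literal per render: always "changed", so the effect re-runs each render.
const renderA = [{ api: "/api/completion" }];
const renderB = [{ api: "/api/completion" }];
console.log(depsChanged(renderA, renderB)); // true

// A stable (hoisted or memoized) reference: unchanged, so the effect is skipped.
const options = { api: "/api/completion" };
console.log(depsChanged([options], [options])); // false
```

In React itself, the equivalent fix is hoisting the object out of the component or wrapping it in `useMemo`/`useCallback` so the reference stays stable across renders.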
I also experienced this issue. I only had it when using the route handler; after changing to the new rsc/ai I have not seen it. It must have something to do with the re-renders when streaming.
So after reading this message here I think I finally solved the issue... Spent so much time on it xD Before, I had the chat messages displayed like this:
and then in the output return
This caused the maximum update depth to be exceeded. However, after changing the structure to:
the lag issues and the "maximum update depth exceeded" error seem to have disappeared completely. In dev I was only ever able to get maybe 4 or 5 messages before Chrome just gave up; now I can get pretty much as many as I want!
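The actual JSX from the comment above isn't preserved in this thread, but the symptom (every streamed chunk re-rendering the whole message list) is consistent with re-rendering each message on every chunk. One common mitigation, sketched here under that assumption, is wrapping each message in `React.memo` with a props comparator so only the message whose content actually changed re-renders. Only the pure comparator is shown (the `Message` shape is an assumption, not the AI SDK's exact type):

```typescript
// Hypothetical sketch: a props comparator for React.memo on a per-message
// component. Returning true tells React it may skip re-rendering that message.
type Message = { id: string; role: "user" | "assistant"; content: string };

function sameMessage(
  prev: { message: Message },
  next: { message: Message }
): boolean {
  // Skip re-render when the message object is unchanged by reference,
  // or when its id and content are unchanged structurally. During streaming,
  // only the last assistant message fails this check, so earlier messages
  // are not re-rendered on every chunk.
  return (
    prev.message === next.message ||
    (prev.message.id === next.message.id &&
      prev.message.content === next.message.content)
  );
}

// Usage (in a .tsx file): const ChatMessage = memo(ChatMessageImpl, sameMessage);
```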
I tried to reproduce the bug. @Jerry-VW can you provide me with code to reproduce? Ideally some modification of the next/useCompletion example (which I tried with gpt-4o).
I have this older branch of my example project where I have the exact same problem using "useChat" and an API route: https://github.com/ElectricCodeGuy/SupabaseAuthWithSSR/tree/0643777a348d63b7d0b0260ad99e5054be1b4062 In the dev environment it would always lag out and crash my browser after a few messages. In production I have not experienced the same level of lag; there it is on par with other chatbots I have tried, though after maybe 20 messages the UI can begin to lock up.
I discovered that when using the streaming method, errors occur when the response message reaches a certain length. Even with streaming, no errors occur if the response message is short. I suspect that this issue arises because the component updates every time a streaming input comes in. |
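If the observation above is right and the problem is one state update per streamed chunk, a generic mitigation is to coalesce chunks so state is set at most once per interval rather than once per token. This is a sketch of that idea, not the AI SDK's API; the interval value and the injectable clock are assumptions chosen to make the behavior testable:

```typescript
// Hypothetical sketch: coalesce rapid streaming updates so UI state is set
// at most once per `intervalMs`, instead of once per incoming chunk.
// `now` is injectable so the behavior can be exercised without real timers.
function makeThrottledSetter<T>(
  set: (value: T) => void,
  intervalMs: number,
  now: () => number = Date.now
) {
  let lastFlush = -Infinity;
  let pending: T | undefined;
  return {
    // Call once per incoming chunk with the latest accumulated text.
    push(value: T): void {
      pending = value;
      const t = now();
      if (t - lastFlush >= intervalMs) {
        lastFlush = t;
        pending = undefined;
        set(value);
      }
    },
    // Call when the stream ends so the final text is never dropped.
    flush(): void {
      if (pending !== undefined) {
        const value = pending;
        pending = undefined;
        set(value);
      }
    },
  };
}

// Example with a fake clock: 5 chunks arriving 10 ms apart, 50 ms interval.
let clock = 0;
const applied: string[] = [];
const setter = makeThrottledSetter<string>((v) => applied.push(v), 50, () => clock);
for (const text of ["H", "He", "Hel", "Hell", "Hello"]) {
  setter.push(text);
  clock += 10;
}
setter.flush();
console.log(applied); // ["H", "Hello"] — 2 state updates instead of 5
```

In a React component, `set` would be the state setter fed by the stream, so the render rate is bounded regardless of how fast chunks arrive.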
I'm looking for a minimal example, because it's unclear to me whether this is an issue with useChat / useCompletion or with the other React code. @ElectricCodeGuy your example has a lot of other code, which makes it hard to pinpoint the issue. @Jerry-VW is this for a single response / completion or for a long chat? @choipd I tried to produce a very long message (max tokens) with no issues; however, I have a pretty fast machine, and that might also play a role here.
I ran into this trying to render
I think the problem is React picking up on the
Description
Use useCompletion from the AI SDK to call gpt-4o with a long response in streaming mode.
It will hang the UI. It looks like it's updating the completion state at a very fast pace.
Code example
No response
Additional context
No response