Summary
Hi,
I have a Next.js project using the Pages Router. In Google Search Console I can see Google crawling routes like domain-name.com/[page], i.e. the literal dynamic segment rather than a resolved path. How can I stop these from being crawled? A similar issue is discussed here, but without a permanent solution: GitHub Discussion.
Additionally, dynamic IDs from NEXT_DATA are being crawled; these appear with / as a prefix.
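For context, this is roughly what the `__NEXT_DATA__` script that the Pages Router embeds in each page looks like. The values below are placeholders, but the `page` field carries the literal `/[page]` pattern and the `query`/`props` fields carry the dynamic IDs that Google seems to be picking up:

```html
<!-- Illustrative only: buildId, props, and IDs are placeholders -->
<script id="__NEXT_DATA__" type="application/json">
  {
    "props": { "pageProps": { "id": "jkw" } },
    "page": "/[page]",
    "query": { "page": "jkw" },
    "buildId": "abc123"
  }
</script>
```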
Routes like /jkw are also being crawled. I could list these routes in a robots.txt file, but they can change in the future.
Is there a permanent solution for this?
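For reference, this is the kind of static robots.txt workaround I mean; the disallowed paths are only examples of route names that can change over time, which is why it doesn't feel permanent:

```
# public/robots.txt (sketch – the route names below are placeholders
# and would need to be kept in sync whenever routes change)
User-agent: *
Disallow: /jkw
Disallow: /another-route-that-may-change
```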
Thanks
Additional information
No response
Example
No response