[BUG] %%pretty with Chinese character error #767
Comments
I experience the same issue.
I experience the same issue too. When I want to display Chinese characters, it returns an error. Thanks for any help; it really confuses me.
This issue should be fixed by this PR, which I just released as part of the 0.20.4 release. I'm marking this as resolved for now, but please let me know if this is not the case after you upgrade.
Thanks for your reply! I found it displays correctly in the Livy server, but an error is returned in the notebook display. Maybe you can use this dataframe to reproduce the problem:

df = spark.createDataFrame([("a", "你好"), ("b", "你好")], ("key", "value"))
df.show(5)

And these are my versions.
Thanks for the code snippet @baixinzxl. I will investigate once I have bandwidth in the coming weeks. Contributions are welcome if you want to dive into the code!
Thanks, take your time!
Sorry to disturb you, but I wonder if there are any findings about the problem?
Hey @baixinzxl, I haven't forgotten about this. I've been stretched for time and have tried to tackle this twice without success. The relevant code is in this file if you want to take a stab at it!
Describe the bug
When using the %%pretty magic, if there is a Chinese character in the table, the table cannot be shown properly and the following error is returned:
An internal error was encountered.
Please file an issue at https://github.com/jupyter-incubator/sparkmagic
Error:
Expected DF rows to be uniform width (581)
To Reproduce
%%pretty
df.show()
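One plausible explanation for the "uniform width" failure (an assumption for illustration, not confirmed against the sparkmagic source) is that CJK characters are a single code point each, but three bytes in UTF-8 and two columns wide on screen. Rows of `df.show()` output that are uniform in character count will therefore look non-uniform under a byte-length or display-width measure. A minimal sketch of the mismatch, using hypothetical example rows:

```python
import unicodedata

# Two hypothetical rows of df.show() output with equal *character* length:
# Spark pads columns by character count, so both rows are 11 characters.
ascii_row = "|  a|hello|"
cjk_row = "|  b|   你好|"

def display_width(s):
    # Terminal column count: East Asian Wide (W) and Fullwidth (F)
    # characters occupy two cells; everything else occupies one.
    return sum(
        2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1
        for ch in s
    )

# Character count agrees, but byte length and display width diverge,
# so any width check mixing these measures rejects the CJK row.
print(len(ascii_row), len(cjk_row))                    # 11 11
print(len(cjk_row.encode("utf-8")))                    # 15 (9 ASCII bytes + 2 * 3)
print(display_width(ascii_row), display_width(cjk_row))  # 11 13
```

Under this assumption, a fix would measure all rows with a single consistent metric (ideally display width) before asserting uniformity.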
Versions: