☂️ Remove dependencies of Tok from downstream tools #11401

Closed · 23 tasks done
dhruvmanila opened this issue May 13, 2024 · 1 comment
Assignees: dhruvmanila
Labels: tracking (A "meta" issue that tracks completion of a bigger task via a list of smaller scoped issues.)

dhruvmanila (Member) commented May 13, 2024

This issue keeps track of all the tasks required to remove the `Tok` dependency from the downstream tools (the linter and the formatter).

Notes:

  • Certain tasks might need to be combined with the parser changes
  • The implementation logic for certain usages isn't finalized yet

Linter

Formatter

Internal document: https://www.notion.so/astral-sh/Downstream-work-items-551b86e104a34054b7192675550a6c25?pvs=4
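
For orientation, here is a minimal sketch of the distinction this umbrella issue is about; the definitions below are illustrative stand-ins and do not reproduce Ruff's actual types. `Tok` is an enum whose variants carry owned data, while `TokenKind` is a lightweight, fieldless enum, so downstream code matches on the kind and, when it needs the text, slices it out of the source using the token's range.

```rust
// Illustrative stand-ins only, not Ruff's real definitions.

#[allow(dead_code)]
#[derive(Debug, Clone, PartialEq)]
enum Tok {
    // Variants own their data, so every consumer pays for the allocation.
    Name { name: String },
    Int { value: i64 },
    Newline,
}

#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TokenKind {
    // Fieldless and `Copy`: cheap to store, compare, and match on.
    Name,
    Int,
    Newline,
}

/// With `TokenKind`, a consumer that needs the token's text slices it out of
/// the source using the token's range instead of cloning it out of the token.
fn token_text(source: &str, range: std::ops::Range<usize>) -> &str {
    &source[range]
}

fn main() {
    let source = "x = 1";
    assert_eq!(token_text(source, 0..1), "x");
}
```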

dhruvmanila added the tracking label May 13, 2024
dhruvmanila self-assigned this May 13, 2024
charliermarsh pushed a commit that referenced this issue May 13, 2024
## Summary

This PR updates `PLE1300` and `PLE1307` to avoid using the lexer.

This is part of #11401 

## Test Plan

`cargo test`
dhruvmanila added a commit that referenced this issue May 13, 2024
## Summary

This PR moves the `W605` rule to the AST checker.

This is part of #11401

## Test Plan

`cargo test`
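
For context, `W605` flags invalid escape sequences such as `\d` in non-raw string literals. The sketch below is hypothetical and simplified, not Ruff's implementation: once the rule runs in the AST checker, it can scan the literal's source text directly instead of relying on lexer output.

```rust
// Hypothetical, simplified version of a W605-style scan over a string
// literal's body text. The valid-escape set is abridged.
const VALID_ESCAPES: &[char] = &[
    '\n', '\\', '\'', '"', 'a', 'b', 'f', 'n', 'r', 't', 'v',
    '0', '1', '2', '3', '4', '5', '6', '7', 'x', 'N', 'u', 'U',
];

/// Returns the byte offsets of backslashes that start an invalid escape.
fn invalid_escape_offsets(body: &str) -> Vec<usize> {
    let mut offsets = Vec::new();
    let mut chars = body.char_indices();
    while let Some((offset, ch)) = chars.next() {
        if ch == '\\' {
            // Consume the escaped character so `\\` is handled correctly.
            match chars.next() {
                Some((_, next)) if VALID_ESCAPES.contains(&next) => {}
                Some(_) => offsets.push(offset),
                None => {}
            }
        }
    }
    offsets
}

fn main() {
    // `\n` is a valid escape; `\d` is not.
    assert_eq!(invalid_escape_offsets(r"valid \n and invalid \d"), vec![21]);
}
```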
dhruvmanila added a commit that referenced this issue May 14, 2024
## Summary

This PR updates the `doc_lines_from_tokens` function to use `TokenKind`
instead of `Tok`.

This is part of #11401 

## Test Plan

`cargo test`
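
As a rough, hypothetical illustration of the kind of pass such a function performs under the `TokenKind` API (the token kinds and the `(kind, range)` shape below are stand-ins, not Ruff's `Tokens` type): walk the token stream and record comments that open their own line.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TokenKind {
    Comment,
    Newline,
    NonLogicalNewline,
    Other,
}

/// Collects the start offsets of comments that begin their own line.
fn own_line_comment_starts(tokens: &[(TokenKind, std::ops::Range<usize>)]) -> Vec<usize> {
    let mut starts = Vec::new();
    let mut at_line_start = true;
    for (kind, range) in tokens {
        match kind {
            TokenKind::Comment if at_line_start => starts.push(range.start),
            TokenKind::Newline | TokenKind::NonLogicalNewline => at_line_start = true,
            _ => at_line_start = false,
        }
    }
    starts
}

fn main() {
    use TokenKind::*;
    let tokens = vec![
        (Comment, 0..8),           // `# header` on its own line
        (NonLogicalNewline, 8..9),
        (Other, 9..10),            // `x`
        (Comment, 12..20),         // trailing comment, not an own-line comment
        (Newline, 20..21),
    ];
    assert_eq!(own_line_comment_starts(&tokens), vec![0]);
}
```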
dhruvmanila added a commit that referenced this issue May 14, 2024
## Summary

This PR updates the blank line rules checker to use `TokenKind` instead
of `Tok`.

This is part of #11401 

## Test Plan

`cargo test`
dhruvmanila added a commit that referenced this issue May 14, 2024
## Summary

This PR moves the following rules to use `TokenKind` instead of `Tok`:
* `PLE2510`, `PLE2512`, `PLE2513`, `PLE2514`, `PLE2515`
* `E701`, `E702`, `E703`
* `ISC001`, `ISC002`
* `COM812`, `COM818`, `COM819`
* `W391`

I've paused here because the next set of rules
(`pyupgrade::rules::extraneous_parentheses`) indexes into the token
slice, but we only have an iterator implementation. So I want to isolate
that change to make sure the logic stays the same when I move to the
iterator approach.

This is part of #11401 

## Test Plan

`cargo test`
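
As a hedged illustration of the style these checks take after the switch (the token kinds and diagnostics below are stand-ins, not Ruff's types): match `TokenKind` values from an iterator with one-token lookahead, roughly in the spirit of the `E702`/`E703` semicolon checks.

```rust
#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TokenKind {
    Semi,
    Comment,
    Newline,
    Other,
}

#[derive(Debug, PartialEq, Eq)]
enum SemicolonDiagnostic {
    MultipleStatements, // roughly E702-style: more code follows the `;`
    UselessSemicolon,   // roughly E703-style: the `;` just ends the line
}

fn check_semicolons(tokens: &[TokenKind]) -> Vec<SemicolonDiagnostic> {
    let mut diagnostics = Vec::new();
    let mut iter = tokens.iter().copied().peekable();
    while let Some(kind) = iter.next() {
        if kind == TokenKind::Semi {
            match iter.peek() {
                Some(&TokenKind::Newline) | Some(&TokenKind::Comment) | None => {
                    diagnostics.push(SemicolonDiagnostic::UselessSemicolon);
                }
                Some(_) => diagnostics.push(SemicolonDiagnostic::MultipleStatements),
            }
        }
    }
    diagnostics
}

fn main() {
    use TokenKind::*;
    // Tokens for something like `x = 1; y = 2;`
    let tokens = [Other, Semi, Other, Semi, Newline];
    assert_eq!(
        check_semicolons(&tokens),
        vec![
            SemicolonDiagnostic::MultipleStatements,
            SemicolonDiagnostic::UselessSemicolon,
        ]
    );
}
```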
dhruvmanila added a commit that referenced this issue May 14, 2024
## Summary

This PR follows up from #11420 to move `UP034` to use `TokenKind`
instead of `Tok`.

The main reason for a separate PR is to make reviewing easier. This
required a lot more updates because the rule used an index (`i`) to
keep track of the current position in the token vector. Now that it's
just an iterator, we use `next` to move it forward and extract the
relevant information.

This is part of #11401

## Test Plan

`cargo test`
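
A minimal sketch of the index-to-iterator change described above, with illustrative token kinds rather than Ruff's API: where the old code peeked at `tokens[i + 1]`, a `Peekable` iterator gives the same one-token lookahead via `peek()`.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TokenKind {
    Lpar,
    Rpar,
    Other,
}

/// Reports offsets where one `(` is immediately followed by another, the
/// starting point of an extraneous-parentheses style check.
fn doubled_open_parens(tokens: &[(TokenKind, usize)]) -> Vec<usize> {
    let mut positions = Vec::new();
    let mut iter = tokens.iter().copied().peekable();
    while let Some((kind, offset)) = iter.next() {
        // One-token lookahead replaces the old `tokens[i + 1]` access.
        if kind == TokenKind::Lpar && matches!(iter.peek(), Some(&(TokenKind::Lpar, _))) {
            positions.push(offset);
        }
    }
    positions
}

fn main() {
    use TokenKind::*;
    // Tokens for something like `print((x))`
    let tokens = [(Other, 0), (Lpar, 5), (Lpar, 6), (Other, 7), (Rpar, 8), (Rpar, 9)];
    assert_eq!(doubled_open_parens(&tokens), vec![5]);
}
```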
dhruvmanila added a commit that referenced this issue May 28, 2024
## Summary

Part of #11401 

This PR refactors most usages of `lex_starts_at` to use the `Tokens`
struct available on the `Program`.

This PR also introduces the following two APIs:
1. `count` (on `StringLiteralValue`) to return the number of string
literal parts in the string expression
2. `after` (on `Tokens`) to return the token slice after the given
`TextSize` offset

## Test Plan

I don't really have a way to test this currently, so I'll have to wait
until all the changes are made and the code compiles.
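
A hedged sketch of what an `after(offset)`-style helper can look like over a sorted token list; the struct, fields, and signature below are illustrative, not the API this PR adds to `Tokens`.

```rust
#[allow(dead_code)]
#[derive(Debug, Clone, Copy)]
struct Token {
    kind: u32,  // stand-in for `TokenKind`
    start: u32, // stand-in for a `TextSize` offset
    end: u32,
}

/// Returns the sub-slice of tokens that start at or after `offset`, using a
/// binary search over the sorted, non-overlapping token starts.
fn after(tokens: &[Token], offset: u32) -> &[Token] {
    let index = tokens.partition_point(|token| token.start < offset);
    &tokens[index..]
}

fn main() {
    let tokens = [
        Token { kind: 0, start: 0, end: 3 },
        Token { kind: 1, start: 4, end: 9 },
        Token { kind: 2, start: 10, end: 11 },
    ];
    assert_eq!(after(&tokens, 4).len(), 2);
    assert_eq!(after(&tokens, 5).len(), 1);
}
```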
dhruvmanila added a commit that referenced this issue May 31, 2024
## Summary

This PR removes the `Tok` enum now that all of its dependencies have
been updated to use `TokenKind` instead.

closes: #11401
dhruvmanila (Member, Author) commented:

Closed by #11628
