optimize dtypes for hyperopt and backtesting to decrease memory usage #9305
base: develop
Conversation
Bumps [pandas](https://github.com/pandas-dev/pandas) from 2.0.3 to 2.1.1. - [Release notes](https://github.com/pandas-dev/pandas/releases) - [Commits](pandas-dev/pandas@v2.0.3...v2.1.1) --- updated-dependencies: - dependency-name: pandas dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com>
Bumps [orjson](https://github.com/ijl/orjson) from 3.9.7 to 3.9.9. - [Release notes](https://github.com/ijl/orjson/releases) - [Changelog](https://github.com/ijl/orjson/blob/master/CHANGELOG.md) - [Commits](ijl/orjson@3.9.7...3.9.9) --- updated-dependencies: - dependency-name: orjson dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com>
Bump pandas from 2.0.3 to 2.1.1
Bump orjson from 3.9.7 to 3.9.9
See other comments.
will have a proper look once these are addressed.
hyperopt logs for SampleStrategy with one pair could give a good insight about gains.

my custom strategy with lots of indicators and 60 pairs (these are for the indicator calculation step): huge memory gains can be seen.
on how many candles? unfortunately that's not visible in the logs - so 730 days can be 730 candles, or 1m candles - where it'll be north of 730_000 candles. In reality, i think we'll want to benchmark 3 things to have something comparable.
for each, we'd also want the timing (how long did it take to reduce the size once or 3 times). I'd ignore hyperopt directly - we can interpolate hyperopt from backtesting results - as we know that it'll simply execute the 2nd and 3rd step.
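A rough sketch of the benchmark asked for above: measure memory before and after the downcast, and time the reduction itself. `reduce_footprint` here is a hypothetical stand-in illustrating the downcast approach, not the PR's actual implementation:

```python
import time

import numpy as np
import pandas as pd


def reduce_footprint(df: pd.DataFrame, skip_columns=()) -> pd.DataFrame:
    """Downcast float64 columns to float32, leaving skip_columns untouched."""
    for col in df.select_dtypes(include=["float64"]).columns:
        if col not in skip_columns:
            df[col] = df[col].astype("float32")
    return df


# Dummy dataframe standing in for a candle dataframe with many indicator columns.
n = 100_000
df = pd.DataFrame({f"ind_{i}": np.random.rand(n) for i in range(20)})

before = df.memory_usage(deep=True).sum()
start = time.perf_counter()
df = reduce_footprint(df)
elapsed = time.perf_counter() - start
after = df.memory_usage(deep=True).sum()
print(f"{before / 1e6:.1f} MB -> {after / 1e6:.1f} MB in {elapsed * 1000:.1f} ms")
```

Running this once per candle count (730, 43_800, 730_000 rows) and once vs. three times per run would give the comparable numbers requested.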
# Conflicts:
#	requirements.txt
i'm not a huge fan of how this is done (mostly, the change in `history_utils`).
Debugging the failing test shows the reason:
The "first" `df.head()` is

```python
# Additions at the top of the file / top of the function to fix output
pd.set_option('display.precision', 15)
pd.set_option('display.max_columns', 1000)
pd.set_option('display.expand_frame_repr', False)
```
the open/high/low/close values change.
The reason is probably clear - it's a rounding issue - but it highlights the reason (and importance) to exclude ohlcv columns.
While this is a small absolute change, it's no longer corresponding to the original exchange candles - without the ability for the user to opt-out of this.
i think we should remove the call in this location (allow loading of the data "as is").
in all other cases, the function should be called with `skip_original` - to not modify the exchange data.
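The rounding issue described above is easy to reproduce: float32 carries only about 7 significant decimal digits, so a typical exchange price no longer round-trips exactly (the price value below is illustrative):

```python
import numpy as np

# A realistic close price; after a float32 downcast the stored value
# no longer matches the original exchange candle exactly.
price = 26734.123456789
as_f32 = float(np.float32(price))
print(price, as_f32, abs(price - as_f32))
```

The absolute error is tiny, but it means the stored candles differ from what the exchange delivered, which is why excluding ohlcv columns (or making the reduction opt-in) matters.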
```python
if not hist.empty:
    hist = reduce_dataframe_footprint(hist)
```
i'm not a huge fan of putting this here (mostly because it's non-optional, but also because it changes values in a wrong way - see other comment for details).
Summary
optimizes the dtypes of historical data to decrease RAM usage.
Quick changelog
- Bumped `pandas` to version 2.1.1 to prevent loss of meaningful decimals in `pd.to_numeric`.
- Added `Bottleneck` to requirements. pandas recommends this library: "Accelerates certain types of nan by using specialized cython routines to achieve large speedup."
What's new?
decreases RAM usage, especially if lots of indicators are used. pandas defaults to `Float64`, but most columns can be downcast to `Float32`.
.Problems?
two failing tests could be looked into deeper to understand the cause of failure. I could use some help to understand the cause of this behavior. haven't seen anything else.