Releases: keras-team/keras
Keras Release 2.9.0 RC1
What's Changed
Full Changelog: v2.9.0-rc0...v2.9.0-rc1
Keras Release 2.9.0 RC0
Please see https://github.com/tensorflow/tensorflow/blob/r2.9/RELEASE.md for Keras release notes.
Major Features and Improvements
`tf.keras`:

- Added `tf.keras.applications.resnet_rs` models. This includes the `ResNetRS50`, `ResNetRS101`, `ResNetRS152`, `ResNetRS200`, `ResNetRS270`, `ResNetRS350`, and `ResNetRS420` model architectures. The ResNetRS models are based on the architecture described in Revisiting ResNets: Improved Training and Scaling Strategies.
- Added `tf.keras.optimizers.experimental.Optimizer`. The reworked optimizer gives more control over different phases of optimizer calls and is easier to customize. We provide Adam, SGD, Adadelta, AdaGrad, and RMSprop optimizers based on `tf.keras.optimizers.experimental.Optimizer`. Generally, the new optimizers work in the same way as the old ones but support new constructor arguments. In the future, the symbols `tf.keras.optimizers.Optimizer`/`Adam`/etc. will point to the new optimizers, and the previous generation of optimizers will be moved to `tf.keras.optimizers.legacy.Optimizer`/`Adam`/etc.
- Added the L2 unit normalization layer `tf.keras.layers.UnitNormalization`.
- Added `tf.keras.regularizers.OrthogonalRegularizer`, a new regularizer that encourages orthogonality between the rows (or columns) of a weight matrix.
- Added the `tf.keras.layers.RandomBrightness` layer for image preprocessing.
- Added APIs for switching between interactive logging and absl logging. By default, Keras always writes the logs to stdout. However, this is not optimal in a non-interactive environment, where you don't have access to stdout but can only view the logs. You can use `tf.keras.utils.disable_interactive_logging()` to write the logs to absl logging. You can also use `tf.keras.utils.enable_interactive_logging()` to change it back to stdout, or `tf.keras.utils.is_interactive_logging_enabled()` to check whether interactive logging is enabled.
- Changed the default value of the `verbose` argument of `Model.evaluate()` and `Model.predict()` to `"auto"`, which defaults to `verbose=1` in most cases and to `verbose=2` when used with `ParameterServerStrategy` or with interactive logging disabled.
- The `jit_compile` argument of `Model.compile()` now applies to `Model.evaluate()` and `Model.predict()` as well. Setting `jit_compile=True` in `compile()` compiles the model's training, evaluation, and inference steps to XLA. Note that `jit_compile=True` may not necessarily work for all models.
- Added DTensor-related Keras APIs under the `tf.keras.dtensor` namespace. The APIs are still classified as experimental. You are welcome to try them out. Please check the tutorial and guide on https://www.tensorflow.org/ for more details about DTensor.
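To illustrate the shape of the reworked optimizer API, here is a toy sketch of a minimal SGD written around an `update_step(gradient, variable)` hook. The class name, plain-list "tensors", and functional update are illustrative assumptions for this sketch, not the real `tf.keras.optimizers.experimental.Optimizer` base class.

```python
class MinimalSGD:
    """Toy sketch of the kind of optimizer the reworked base class is
    meant to make easy to customize: all per-variable update logic
    lives in one overridable hook. Illustrative only, not the real API."""

    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate

    def update_step(self, gradient, variable):
        # Plain lists stand in for tensors; return the updated values.
        return [v - self.learning_rate * g for v, g in zip(variable, gradient)]

opt = MinimalSGD(learning_rate=0.5)
print(opt.update_step([1.0, -2.0], [0.0, 0.0]))  # [-0.5, 1.0]
```

Subclassing and overriding just this one hook is the customization pattern the new optimizers aim for.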
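The math behind the new `tf.keras.layers.UnitNormalization` layer is just L2 normalization along an axis. A minimal pure-Python sketch of that math (not the Keras implementation, which operates on tensors and takes an `axis` argument):

```python
import math

def unit_normalize(vector):
    """L2-normalize a vector so it has unit norm (conceptual sketch)."""
    norm = math.sqrt(sum(x * x for x in vector))
    if norm == 0.0:
        return list(vector)  # guard against division by zero
    return [x / norm for x in vector]

print(unit_normalize([3.0, 4.0]))  # [0.6, 0.8]
```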
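The idea behind `tf.keras.regularizers.OrthogonalRegularizer` can be sketched as a penalty on the pairwise dot products between distinct rows of a weight matrix: orthogonal rows incur no penalty. The exact formula below (absolute dot products scaled by a `factor`) is an illustrative assumption, not the library's implementation.

```python
def orthogonal_penalty(rows, factor=0.01):
    """Conceptual orthogonality penalty: sum of |row_i . row_j| over
    distinct row pairs, scaled by `factor` (illustrative sketch)."""
    penalty = 0.0
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            dot = sum(a * b for a, b in zip(rows[i], rows[j]))
            penalty += abs(dot)
    return factor * penalty

# Orthogonal rows incur zero penalty; parallel rows are penalized.
print(orthogonal_penalty([[1.0, 0.0], [0.0, 1.0]]))  # 0.0
print(orthogonal_penalty([[1.0, 0.0], [1.0, 0.0]]))  # 0.01
```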
What's Changed
- Update_OptimizerV2.py by @sachinprasadhs in #15819
- Use `assign_sub` when computing `moving_average_update` by @lgeiger in #15773
- Document the verbose parameter in EarlyStopping by @ThunderKey in #15817
- Fix LSTM and GRU cuDNN kernel failure for RaggedTensors. by @foxik in #15756
- A tiny problem in an AttributeError message in base_layer.py by @Aujkst in #15847
- Update training_generator_test.py by @sachinprasadhs in #15876
- Minor correction in RegNet docs by @AdityaKane2001 in #15901
- add scoring methods in Luong-style attention by @old-school-kid in #15867
- refactoring code with List Comprehension by @idiomaticrefactoring in #15924
- added clarifying statement to save_model example text by @soosung80 in #15930
- Update base_conv.py by @AdityaKane2001 in #15943
- Update global_clipnorm by @sachinprasadhs in #15938
- Update callbacks.py by @Cheril311 in #15977
- Applied correct reshaping to metric func sparse_top_k by @dfossl in #15997
- Keras saving/loading: Add a custom object saving test to verify the `keras.utils.register_keras_serializable` flows we are expecting users to follow work, and will continue to work with the new design and implementation coming in. by @copybara-service in #15992
- Metric accuracy bug fixes - Metrics Refactor proposal by @dfossl in #16010
- Make `classifier_activation` argument accessible for DenseNet and NASNet models by @adrhill in #16005
- Copy image utils from keras_preprocessing directly into core keras by @copybara-service in #15975
- Update `keras.callbacks.BackupAndRestore` docs by @lgeiger in #16018
- Updating the definition of an argument in the text_dataset_from_directory function by @shraddhazpy in #16012
- Remove deprecated TF1 Layer APIs `apply()`, `get_updates_for()`, `get_losses_for()`, and remove the `inputs` argument in the `add_loss()` method. by @copybara-service in #16046
- Fixed minor typos by @hdubbs in #16071
- Fix typo in documentation by @futtetennista in #16082
- Issue #16090: Split input_shapes horizontally in utils.vis_utils.plot_model by @RicardFos in #16096
- Docker env setup related changes by @shraddhazpy in #16040
- Fixed EfficientNetV2 b parameter not increasing with each block. by @sebastian-sz in #16145
- Updated args of train_on_batch method by @jvishnuvardhan in #16147
- Binary accuracy bug fixes - Metric accuracy method refactor by @dfossl in #16083
- Fix the corner case for dtensor model layout map. by @copybara-service in #16170
- Fix typo in docstring for `DenseFeatures` by @gadagashwini in #16165
- Fix typo in Returns Section by @chunduriv in #16182
- Some tests misusing assertTrue for comparisons fix by @code-review-doctor in #16073
- Add .DS_Store to .gitignore for macOS users by @tsheaff in #16198
- Solve memory inefficiency in RNNs by @atmguille in #16174
- Update README.md by @ahmedopolis in #16259
- Fix documentation text being mistakenly rendered as code by @guberti in #16253
- Allow single input for merging layers Add, Average, Concatenate, Maximum, Minimum, Multiply by @foxik in #16230
- Mention image dimensions format in image_dataset_from_directory by @nrzimmermann in #16232
- fix thresholded_relu to support list datatype by @old-school-kid in #16277
- Implement all tf interpolation upscaling methods by @Mahrkeenerh in #16249
New Contributors
- @lgeiger made their first contribution in #15773
- @ThunderKey made their first contribution in #15817
- @Aujkst made their first contribution in #15847
- @idiomaticrefactoring made their first contribution in #15924
- @soosung80 made their first contribution in #15930
- @Cheril311 made their first contribution in #15977
- @dfossl made their first contribution in #15997
- @adrhill made their first contribution in #16005
- @shraddhazpy made their first contribution in #16012
- @hdubbs made their first contribution in #16071
- @futtetennista made their first contribution in #16082
- @RicardFos made their first contribution in #16096
- @gadagashwini made their first contribution in #16165
- @chunduriv made their first contribution in #16182
- @code-review-doctor made their first contribution in #16073
- @tsheaff made their first contribution in #16198
- @atmguille made their first contribution in #16174
- @ahmedopolis made their first contribution in #16259
- @guberti made their first contribution in #16253
- @nrzimmermann made their first contribution in #16232
- @Mahrkeenerh made their first contribution in #16249
Full Changelog: v2.8.0-rc0...v2.9.0-rc0
Keras Release 2.8.0
Please see the release history at https://github.com/tensorflow/tensorflow/releases/tag/v2.8.0 for more details.
Keras Release 2.8.0 RC1
What's Changed
Full Changelog: v2.8.0-rc0...v2.8.0-rc1
Keras Release 2.8.0 RC0
Please see https://github.com/tensorflow/tensorflow/blob/r2.8/RELEASE.md for Keras release notes.
`tf.keras`:

- Preprocessing Layers
  - Added a `tf.keras.layers.experimental.preprocessing.HashedCrossing` layer which applies the hashing trick to the concatenation of crossed scalar inputs. This provides a stateless way to try adding feature crosses of integer or string data to a model.
  - Removed `keras.layers.experimental.preprocessing.CategoryCrossing`. Users should migrate to the `HashedCrossing` layer or use `tf.sparse.cross`/`tf.ragged.cross` directly.
  - Added additional `standardize` and `split` modes to `TextVectorization`: `standardize="lower"` will lowercase inputs, `standardize="string_punctuation"` will remove all punctuation, and `split="character"` will split on every Unicode character.
  - Added an `output_mode` argument to the `Discretization` and `Hashing` layers, with the same semantics as other preprocessing layers. All categorical preprocessing layers now support `output_mode`.
  - All preprocessing layer output will follow the compute dtype of a `tf.keras.mixed_precision.Policy`, unless constructed with `output_mode="int"`, in which case output will be `tf.int64`. The output type of any preprocessing layer can be controlled individually by passing a `dtype` argument to the layer.
- Added a `tf.random.Generator` for Keras initializers and all RNG code.
- Added 3 new APIs for enabling/disabling/checking the usage of `tf.random.Generator` in the Keras backend, which will be the new backend for all RNG in Keras. We plan to switch on the new code path by default in TF 2.8, and the behavior change is likely to cause some breakage on the user side (e.g. if a test is checking against some golden number). These 3 APIs will allow users to disable the new behavior and switch back to the legacy behavior if they prefer. In the future (e.g. TF 2.10), we expect to totally remove the legacy code path (stateful random ops), and these 3 APIs will be removed as well.
- `tf.keras.callbacks.experimental.BackupAndRestore` is now available as `tf.keras.callbacks.BackupAndRestore`. The experimental endpoint is deprecated and will be removed in a future release.
- `tf.keras.experimental.SidecarEvaluator` is now available as `tf.keras.utils.SidecarEvaluator`. The experimental endpoint is deprecated and will be removed in a future release.
- Metrics update and collection logic in the default `Model.train_step()` is now customizable via overriding `Model.compute_metrics()`.
- Losses computation logic in the default `Model.train_step()` is now customizable via overriding `Model.compute_loss()`.
- `jit_compile` added to `Model.compile()` on an opt-in basis to compile the model's training step with XLA. Note that `jit_compile=True` may not necessarily work for all models.
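The "hashing trick" behind `HashedCrossing` can be sketched in plain Python: join the crossed scalar inputs into a single key, then hash the key into a fixed number of bins. The key format and the use of `zlib.crc32` as a stable hash are assumptions for this sketch; the real layer's hash function and key encoding are implementation details.

```python
import zlib

def hashed_crossing(features, num_bins):
    """Conceptual sketch of a stateless feature cross: combine the
    inputs into one key, then hash it into `num_bins` buckets."""
    key = "_X_".join(str(f) for f in features)
    return zlib.crc32(key.encode("utf-8")) % num_bins

# The crossing is stateless: the same inputs always land in the same bin.
bin_a = hashed_crossing(["US", 35], num_bins=10)
bin_b = hashed_crossing(["US", 35], num_bins=10)
print(bin_a == bin_b)  # True
```

Because no vocabulary is learned, this works for integer or string data without an adaptation step.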
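The new `TextVectorization` modes behave roughly as sketched below in plain Python (a conceptual illustration of the semantics, not the Keras implementation, which operates on tensors):

```python
import string

def standardize(text, mode):
    """Sketch of standardize="lower" and standardize="string_punctuation"."""
    if mode == "lower":
        return text.lower()
    if mode == "string_punctuation":
        # Remove all ASCII punctuation characters.
        return text.translate(str.maketrans("", "", string.punctuation))
    raise ValueError(f"unknown mode: {mode}")

def split_characters(text):
    """Sketch of split="character": split on every Unicode character."""
    return list(text)

print(standardize("Hello, World!", "lower"))               # hello, world!
print(standardize("Hello, World!", "string_punctuation"))  # Hello World
print(split_characters("héllo"))  # ['h', 'é', 'l', 'l', 'o']
```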
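The `Model.compute_loss()`/`Model.compute_metrics()` hooks follow a template-method pattern: the default training step calls overridable hooks instead of hard-coding the logic. A toy stand-in (method names mirror the Keras hooks, but the classes, signatures, and default mean-squared-error loss here are illustrative assumptions):

```python
class BaseModel:
    """Toy stand-in for a model whose train_step() delegates to hooks."""

    def train_step(self, y_true, y_pred):
        loss = self.compute_loss(y_true, y_pred)
        metrics = self.compute_metrics(y_true, y_pred)
        return {"loss": loss, **metrics}

    def compute_loss(self, y_true, y_pred):
        # Default loss for this sketch: mean squared error.
        return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

    def compute_metrics(self, y_true, y_pred):
        return {}

class L1Model(BaseModel):
    def compute_loss(self, y_true, y_pred):
        # Override just the loss logic; train_step() is untouched.
        return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

print(BaseModel().train_step([1.0, 2.0], [1.0, 4.0]))  # {'loss': 2.0}
print(L1Model().train_step([1.0, 2.0], [1.0, 4.0]))    # {'loss': 1.0}
```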
What's Changed
- Cleanup legacy Keras files by @qlzh727 in #14256
- Sync OSS keras to head. by @qlzh727 in #14300
- Update build script for GPU build. by @copybara-service in #14336
- Move the LossReduction class from tf to Keras. by @copybara-service in #14362
- Update keras API generate script. by @copybara-service in #14418
- Adding extra target that are needed by PIP package dependency. by @copybara-service in #14421
- Add test related targets to PIP package list. by @copybara-service in #14427
- Sync OSS keras to head. by @copybara-service in #14428
- Update visibility setting for keras/tests to enable PIP package testing. by @copybara-service in #14429
- Remove items from PIP_EXCLUDED_FILES which is needed with testing PIP. by @copybara-service in #14431
- Split bins into num_bins and bin_boundaries arguments for discretization by @copybara-service in #14507
- Update pbtxt to use _PRFER_OSS_KERAS=1. by @copybara-service in #14519
- Sync OSS keras to head. by @copybara-service in #14572
- Sync OSS keras to head. by @copybara-service in #14614
- Cleanup the bazelrc and remove unrelated items to keras. by @copybara-service in #14616
- Sync OSS keras to head. by @copybara-service in #14624
- Remove object metadata when saving SavedModel. by @copybara-service in #14697
- Fix static shape inference for Resizing layer. by @copybara-service in #14712
- Make TextVectorization work with list input. by @copybara-service in #14711
- Remove deprecated methods of Sequential model. by @copybara-service in #14714
- Improve Model docstrings by @copybara-service in #14726
- Add migration doc for legacy_tf_layers/core.py. by @copybara-service in #14740
- PR #43417: Fixes #42872: map_to_outputs_names always returns a copy by @copybara-service in #14755
- Rename the keras.py to keras_lib.py to resolve the name conflict during OSS test. by @copybara-service in #14778
- Switch to tf.io.gfile for validating vocabulary files. by @copybara-service in #14788
- Avoid serializing generated thresholds for AUC metrics. by @copybara-service in #14789
- Fix data_utils.py when name ends with `.tar.gz` by @copybara-service in #14777
- Fix lookup layer oov token check when num_oov_indices > len(vocabulary tokens) by @copybara-service in #14793
- Update callbacks.py by @jvishnuvardhan in #14760
- Fix keras metric.result_state when the metric variables are sharded variable. by @copybara-service in #14790
- Fix typos in CONTRIBUTING.md by @amogh7joshi in #14642
- Fixed ragged sample weights by @DavideWalder in #14804
- Pin the protobuf version to 3.9.2 which is same as the TF. by @copybara-service in #14835
- Make variable scope shim regularizer adding check for attribute presence instead of instance class by @copybara-service in #14837
- Add missing license header for leakr check. by @copybara-service in #14840
- Fix TextVectorization with output_sequence_length on unknown input shapes by @copybara-service in #14832
- Add more explicit error message for instance type checking of optimizer. by @copybara-service in #14846
- Set aggregation for variable when using PS Strategy for aggregating variables when running multi-gpu tests. by @copybara-service in #14845
- Remove unnecessary reshape layer in MobileNet architecture by @copybara-service in #14854
- Removes caching of the convolution tf.nn.convolution op. While this provided some performance benefits, it also produced some surprising behavior for users in eager mode. by @copybara-service in #14855
- Output int64 by default from Discretization by @copybara-service in #14841
- add patterns to .gitignore by @haifeng-jin in #14861
- Clarify documentation of DepthwiseConv2D by @vinhill in #14817
- add DepthwiseConv1D layer by @fsx950223 in #14863
- Make model summary wrap by @Llamrei in #14865
- Update the link in Estimator by @hirobf10 in #14901
- Fix `int` given for `float` args by @SamuelMarks in #14900
- Fix RNN, StackedRNNCells with nested state_size, output_size TypeError issues by @Ending2015a in #14905
- Fix the use of imagenet_utils.preprocess_input within a Lambda layer with mixed precision by @anth2o in #14917
- Fix docstrings in `MultiHeadAttention` layer call argument `return_attention_scores`. by @guillesanbri in #14920
- Check if layer has _metrics_lock attribute by @DanBmh in #14903
- Make keras.Model picklable by @adriangb in #14748
- Fix typo in docs by @seanmor5 in #14946
- use getter setter by @fsx950223 in #14948
- Close _SESSION.session in clear_session by @sfreilich in #14414
- Fix keras nightly PIP package build. by @copybara-service in #14957
- Fix EarlyStopping stop at fisrt epoch when patience=0 ; add auc to au… by @DachuanZhao in...
Keras Release 2.7.0
Please see the release history at https://github.com/tensorflow/tensorflow/releases/tag/v2.7.0 for more details.
Keras Release 2.7.0 RC2
What's Changed
- Fix tf_idf output mode for lookup layers by @mattdangerw in #15492
- Disable the failing tests due to numpy 1.20 change by @qlzh727 in #15552
Full Changelog: v2.7.0-rc1...v2.7.0-rc2
Keras Release 2.7.0 RC1
Cherry-picked the documentation update for functional model slicing.
Keras Release 2.7.0 RC0
Remove temporary monitoring now that the underlying perf issue is resolved. (PiperOrigin-RevId: 398533606)
Keras Release 2.6.0
Keras 2.6.0 is the first release of the TensorFlow implementation of Keras in the present repo.
The code under tensorflow/python/keras is considered legacy and will be removed in future releases (TF 2.7 or later). Any user who imports `tensorflow.python.keras` should update their code to use the public `tf.keras` instead.
The API endpoints for `tf.keras` stay unchanged, but are now backed by the keras PIP package. All Keras-related PRs and issues should now be directed to the GitHub repository keras-team/keras.
For detailed release notes about `tf.keras` behavior changes, please take a look at the TensorFlow release notes.