
[POC][OV] Support OpenVINO as Keras 3 backend #19727

Open · wants to merge 1 commit into master
Conversation

rkazants

Details: Support OpenVINO as a Keras 3 backend. This is an inference-only backend. To switch it on, define the environment variable as follows: os.environ["KERAS_BACKEND"] = "openvino"

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
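For reference, a minimal sketch of selecting the backend as the description above suggests. Note the variable must be set before Keras is imported, since the backend is chosen at import time (the `keras` import is commented out here because this PR's backend is not in released Keras):

```python
import os

# Select the inference-only OpenVINO backend; this must happen
# before `import keras`, because the backend is read at import time.
os.environ["KERAS_BACKEND"] = "openvino"

# import keras  # Keras would now dispatch ops to the OpenVINO backend
```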

google-cla bot commented May 17, 2024

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up-to-date status, view the checks section at the bottom of the pull request.


@fchollet fchollet left a comment


Thanks for the PR!

@@ -81,6 +81,32 @@ def __init__(self, inputs, outputs, name=None):
self._nodes_by_depth = nodes_by_depth
self._operations = operations
self._operations_by_depth = operations_by_depth
if backend() == "openvino":
fchollet (Member):

We should not add backend-specific modifications to shared abstractions.

rkazants (Author):

Hi @fchollet, OpenVINO does not support eager mode for inference. We need to build the OV graph ahead of time and run inference on the whole graph, so I decided to construct the graph in init and use it in call. Please suggest how to avoid backend-specific code here, given this OpenVINO constraint.

fchollet (Member):

Just disable what doesn't work, with a clear error message, at the level of the openvino backend. If openvino is only usable via evaluate and predict, that's OK.

Though tbh, since this backend is inference-only and doesn't have eager support, it sounds like maybe it shouldn't be a backend, but instead should be an export format in model.export().
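One way to read this suggestion (a hedged sketch; `not_supported` and the stubbed op names are illustrative, not the PR's actual code): the openvino backend module could expose fail-fast stubs for everything it cannot support, so users get a clear error instead of silent misbehavior.

```python
def not_supported(op_name):
    """Return a stub that raises a clear error for an unsupported op."""
    def stub(*args, **kwargs):
        raise NotImplementedError(
            f"`{op_name}` is not supported by the inference-only openvino "
            "backend. Train with another backend (JAX/TF/torch) and use "
            "openvino only through `evaluate()`/`predict()`."
        )
    return stub

# Hypothetical examples of ops a training-capable backend would provide:
stop_gradient = not_supported("stop_gradient")
fit_step = not_supported("fit_step")
```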

rkazants (Author) commented May 18, 2024:

@fchollet, please clarify. Is your point to remove this backend-specific code and configure the backend via _... variables? That configuration would tell Keras 3 that OpenVINO only works in non-eager mode, correct? I want the ops helpers to work only with symbolic tensors, i.e. ov::Output or ov::Node instances (for graph construction), and I want this graph to be constructed and compiled for a device only once per device and input-shape specification. Is that possible in Keras 3? Should it be done in an OpenVINOTrainer class? And which methods for graph construction and compilation should be implemented in that class?

Or is your point only to implement an export method? That would allow inference only through the OpenVINO API after model loading, not through Keras 3. Please correct me if I misunderstood.

fchollet (Member):

So there are two entirely separate options:

  1. Make an openvino backend. If we do this, it should have roughly the same scope of functionality as the numpy backend. It should fit the backend format. I am not sure this is feasible if openvino has no support for eager execution.
  2. Make model.export()/ExportArchive able to save a model in the openvino format (the model would come from Keras + JAX, Keras + torch, or Keras + TF). That way folks can train their models in another backend, then export them to openvino.

rkazants (Author) commented May 20, 2024:

Here is an example of how compiled_model is created:

import numpy as np
import openvino.runtime.opset9 as ov
from openvino.runtime import Model, Core

# create a model with an element-wise divide
x = ov.parameter([1], name="x", dtype=np.int32)
y = ov.parameter([1], name="y", dtype=np.int32)
divide = ov.divide(x, y)
ov_model = Model([divide], [x, y], "model")

# compile the model for the CPU device
core = Core()
compiled_model = core.compile_model(ov_model, "CPU")

As a first step we build the ov::Model object ov_model, then compile it for CPU.
@fchollet, maybe I can set up a meeting with you on Discord (@rkazants on Discord) to discuss further steps and a potential solution?

Really appreciate your time and help,
Roman

fchollet (Member):

And then you call the compiled_model on numpy data?

That sort of logic would belong in the Trainer class. You can try prototyping an OpenVINO Trainer that works like this.
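A rough sketch of what such a trainer could look like (all names here are assumptions, and `_compile_for` is a pure-Python stand-in for the real OpenVINO graph build/compile step): the key idea is compiling once per input signature and reusing the compiled graph on later predict() calls.

```python
import numpy as np

class OpenVINOTrainerSketch:
    """Hedged prototype: compile once per input shape, reuse thereafter."""

    def __init__(self, forward_fn):
        self._forward_fn = forward_fn  # stand-in for the Keras model graph
        self._compiled = {}            # cache: input shape -> compiled graph

    def _compile_for(self, shape):
        # A real backend would trace the model into an ov.Model here and
        # call core.compile_model(ov_model, "CPU"); we just return the fn.
        return self._forward_fn

    def predict(self, x):
        key = tuple(np.shape(x))
        if key not in self._compiled:
            self._compiled[key] = self._compile_for(key)
        return self._compiled[key](x)
```

For example, `OpenVINOTrainerSketch(lambda x: x * 2).predict(np.array([1, 2]))` runs the cached "compiled" graph on numpy data; a second call with the same input shape skips compilation.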

rkazants (Author):

> And then you call the compiled_model on numpy data?

Exactly: it calls compiled_model on numpy data for prediction.

fchollet (Member):

If we implement a backend like this, it won't be usable eagerly, but only through Model.evaluate() and Model.predict(). That's quite limiting. It's still feasible, though. We won't be able to test it on CI, since most of our tests need eager mode; instead, we'd need a new set of integration tests specifically for this backend.

rkazants (Author):

For inference, such functionality is sufficient. Let me try to do this.

Best regards,
Roman

@@ -289,6 +291,9 @@ def __init__(
self._convert_input_args = True
# Whether to allow non-tensors as positional arguments in `call()`.
self._allow_non_tensor_positional_args = False
if backend.backend() == "openvino":
fchollet (Member):

This should not be backend-specific.



def cholesky(a):
return np.linalg.cholesky(a)
fchollet (Member):

All of the code in many of these files still refers to numpy. If something isn't implemented, it's better to raise NotImplementedError, so we can keep track of it.
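Following that suggestion, an unimplemented op in the openvino backend could look like this (a sketch of the suggested pattern, not the PR's code):

```python
def cholesky(a):
    # Do not silently fall back to numpy: raise so the gap stays visible
    # and trackable until a real OpenVINO implementation lands.
    raise NotImplementedError(
        "`cholesky` is not yet implemented for the openvino backend."
    )
```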

@gbaned gbaned added this to Assigned Reviewer in PR Queue via automation May 18, 2024

codecov-commenter commented May 20, 2024

Codecov Report

Attention: Patch coverage is 0.14918%, with 2008 lines in your changes missing coverage. Please review.

Project coverage is 75.22%. Comparing base (515e6dd) to head (76d4c1a).
Report is 12 commits behind head on master.

Files Patch % Lines
keras/src/backend/openvino/numpy.py 0.00% 657 Missing ⚠️
keras/src/backend/openvino/nn.py 0.00% 442 Missing ⚠️
keras/src/backend/openvino/math.py 0.00% 163 Missing ⚠️
keras/src/backend/openvino/image.py 0.00% 149 Missing ⚠️
keras/src/backend/openvino/trainer.py 0.00% 149 Missing ⚠️
keras/src/backend/openvino/rnn.py 0.00% 131 Missing ⚠️
keras/src/backend/openvino/core.py 0.00% 116 Missing ⚠️
keras/src/backend/openvino/random.py 0.00% 76 Missing ⚠️
keras/src/backend/openvino/linalg.py 0.00% 49 Missing ⚠️
keras/src/ops/function.py 3.12% 30 Missing and 1 partial ⚠️
... and 8 more
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #19727      +/-   ##
==========================================
- Coverage   78.52%   75.22%   -3.31%     
==========================================
  Files         498      509      +11     
  Lines       45759    47768    +2009     
  Branches     8455     8754     +299     
==========================================
- Hits        35934    35933       -1     
- Misses       8090    10097    +2007     
- Partials     1735     1738       +3     
Flag Coverage Δ
keras 75.07% <0.14%> (-3.30%) ⬇️
keras-jax 59.33% <0.09%> (-2.61%) ⬇️
keras-numpy 53.92% <0.14%> (-2.38%) ⬇️
keras-tensorflow 60.74% <0.09%> (-2.67%) ⬇️
keras-torch 59.39% <0.09%> (-2.62%) ⬇️

Flags with carried forward coverage won't be shown.

