
chapter-2/1-predict-class MobileNetV3Small #173

Open
zoldaten opened this issue Apr 4, 2023 · 0 comments
zoldaten commented Apr 4, 2023

The chapter ends with a MobileNetV2 example. I extended it a bit using MobileNetV3, since it is a bit newer, but the performance results show something strange.

import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input, decode_predictions

def predict2(img_path):
    img = image.load_img(img_path, target_size=(224, 224))
    model = tf.keras.applications.MobileNetV2()
    img_array = image.img_to_array(img)
    img_batch = np.expand_dims(img_array, axis=0)
    img_preprocessed = preprocess_input(img_batch)
    prediction = model.predict(img_preprocessed)
    print(decode_predictions(prediction, top=3)[0])

%timeit -r 3 predict2(IMG_PATH) 

MobileNetV2 gives 912 ms ± 5.02 ms per loop, while MobileNetV3Small gives 1.04 s ± 19.9 ms per loop:

# Note: MobileNetV3's preprocess_input is a pass-through, because the Keras
# V3 models include their preprocessing (rescaling) layers inside the model.
from tensorflow.keras.applications import MobileNetV3Small
from tensorflow.keras.applications.mobilenet_v3 import preprocess_input, decode_predictions

def predict3(img_path):
    img = image.load_img(img_path, target_size=(224, 224))
    model = MobileNetV3Small()
    img_array = image.img_to_array(img)
    img_batch = np.expand_dims(img_array, axis=0)
    img_preprocessed = preprocess_input(img_batch)
    prediction = model.predict(img_preprocessed)
    print(decode_predictions(prediction, top=3)[0])

%timeit -r 3 predict3(IMG_PATH)

Very strange: MobileNetV3Small should be faster.
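A likely cause (my assumption, not verified against the book's code): `%timeit` is timing the whole function, and both `predict2` and `predict3` construct the model and load ImageNet weights on every call, so the measurement is dominated by model construction rather than inference. The pitfall can be sketched without TensorFlow, using a hypothetical `FakeModel` stand-in (all names here are illustrative, not TF API):

```python
import time

class FakeModel:
    """Stand-in for a Keras model: expensive to construct, cheap to run."""
    def __init__(self):
        time.sleep(0.05)   # simulates building the graph and loading weights
    def predict(self, x):
        return x * 2       # simulates a fast forward pass

def predict_rebuild(x):
    model = FakeModel()    # rebuilt on every call, as in predict2/predict3
    return model.predict(x)

model = FakeModel()        # built once, outside the timed function
def predict_reuse(x):
    return model.predict(x)

def time_calls(fn, n=5):
    """Average wall-clock time per call of fn, over n calls."""
    start = time.perf_counter()
    for _ in range(n):
        fn(1.0)
    return (time.perf_counter() - start) / n

slow = time_calls(predict_rebuild)
fast = time_calls(predict_reuse)
print(f"rebuild each call: {slow * 1000:.1f} ms, reuse: {fast * 1000:.1f} ms")
```

To compare the networks' actual inference speed, hoist the `model = ...` line out of `predict2`/`predict3` and run `%timeit` on a function that only preprocesses and calls `model.predict`.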
