SIGSEGV in aidl-service-armnn-gpu when initializing TFLite NNAPI Delegate on specific Vendor Devices (Vivo/Mali)


I am developing a custom C++ inference engine for Android using the TensorFlow Lite C API. The engine is compiled as a shared library (.so) and loaded via JNI.

The engine works perfectly on Pixel and Samsung devices. However, on certain Chinese vendor devices (in my case a Vivo V2419 running Android 14), the application crashes immediately when the NNAPI delegate is initialized.

The crash does not occur in my code but inside the Android hardware service process (android.hardware.neuralnetworks@aidl-service-armnn-gpu). Because this is a driver-level segmentation fault (SIGSEGV), not a C++ exception, I cannot catch it with a standard try/catch block, and it brings down my entire application process.

Relevant logs (logcat):

    tflite       I  Created TensorFlow Lite delegate for NNAPI.
    tflite       I  Initialized TensorFlow Lite runtime.
    tflite       W  NNAPI SL driver did not implement SL_ANeuralNetworksDiagnostic_registerCallbacks!
    TypeManager  I  Failed to read /vendor/etc/nnapi_extensions_app_allowlist ; No app allowlisted for vendor extensions use.
    tflite       W  NNAPI SL driver did not implement SL_ANeuralNetworksDiagnostic_registerCallbacks!

    ... a few milliseconds later ...

    libc  and...tworks@aidl-service-armnn-gpu  A  Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0xb4000080bd37b900 in tid 20214 (android.hardwar), pid 20202 (android.hardwar)

If I disable NNAPI and run strictly on the CPU (plain interpreter options with TfLiteInterpreterOptionsSetNumThreads, no delegate), the app runs perfectly on the affected device. The model is a standard quantized TFLite model that works on other devices, and the app has standard permissions.
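For reference, the working CPU-only path is roughly this (trimmed; model, interpreter, and valid are members of my engine class):

    // CPU-only fallback: no delegate, just a fixed thread count.
    TfLiteInterpreterOptions* cpuOpts = TfLiteInterpreterOptionsCreate();
    TfLiteInterpreterOptionsSetNumThreads(cpuOpts, 4);

    interpreter = TfLiteInterpreterCreate(model, cpuOpts);
    TfLiteInterpreterOptionsDelete(cpuOpts);  // options can be freed once the interpreter exists

    if (interpreter && TfLiteInterpreterAllocateTensors(interpreter) == kTfLiteOk) {
        LOGI("Using CPU");
        valid = true;
    }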

Here is the code I am using for NNAPI init:

    options = TfLiteInterpreterOptionsCreate();

    // Default NNAPI delegate options; allow FP32 ops to run in FP16 on the accelerator.
    TfLiteNnapiDelegateOptions nnapiOpts = TfLiteNnapiDelegateOptionsDefault();
    nnapiOpts.allow_fp16 = 1;

    nnapiDelegate = TfLiteNnapiDelegateCreate(&nnapiOpts);

    if (nnapiDelegate) {
        TfLiteInterpreterOptionsAddDelegate(options, nnapiDelegate);
        interpreter = TfLiteInterpreterCreate(model, options);

        if (interpreter && TfLiteInterpreterAllocateTensors(interpreter) == kTfLiteOk) {
            LOGI("Using NNAPI delegate");
            valid = true;
            return;
        }

        // Interpreter creation or tensor allocation failed: clean up and fall back.
        LOGE("NNAPI Failed");
        if (interpreter) TfLiteInterpreterDelete(interpreter);
        interpreter = nullptr;
        TfLiteNnapiDelegateDelete(nnapiDelegate);
        nnapiDelegate = nullptr;
    }
    // ...falls through to the CPU-only path shown above.

Since this is a driver-level crash on specific vendor ROMs/hardware that cannot be caught with try/catch:

  • Is there a recommended way to use NNAPI safely on Android without taking down the main process?

  • Are there known flags (for example, disabling specific accelerators) that mitigate crashes on armnn-gpu drivers?

Any other information about this issue would be appreciated.
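On the second point, I noticed TfLiteNnapiDelegateOptions exposes an accelerator_name field, so one mitigation I'm considering is enumerating the NNAPI devices and pinning the delegate to anything that isn't the armnn driver. A rough, untested sketch (the helper name and the "armnn" substring check are my own guesses):

    #include <android/NeuralNetworks.h>  // NNAPI NDK API (device enumeration needs API 29+)
    #include <cstring>

    // Hypothetical helper: return the name of the first NNAPI device whose
    // name does not contain "armnn", or nullptr if none is found.
    const char* PickNonArmnnAccelerator() {
        uint32_t count = 0;
        if (ANeuralNetworks_getDeviceCount(&count) != ANEURALNETWORKS_NO_ERROR)
            return nullptr;
        for (uint32_t i = 0; i < count; ++i) {
            ANeuralNetworksDevice* dev = nullptr;
            const char* name = nullptr;
            if (ANeuralNetworks_getDevice(i, &dev) == ANEURALNETWORKS_NO_ERROR &&
                ANeuralNetworksDevice_getName(dev, &name) == ANEURALNETWORKS_NO_ERROR &&
                name != nullptr && std::strstr(name, "armnn") == nullptr) {
                return name;  // the name's lifetime is managed by the NNAPI runtime
            }
        }
        return nullptr;
    }

    // Usage: pin the delegate, or skip NNAPI entirely if nothing safe exists.
    //   nnapiOpts.accelerator_name = PickNonArmnnAccelerator();
    //   if (nnapiOpts.accelerator_name == nullptr) { /* fall back to CPU */ }

I don't know yet whether steering around the armnn-gpu device actually avoids the crash, since the SIGSEGV happens before my code gets any result back.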

Environment

  • NDK: r26b / r29

  • TensorFlow Lite: 2.16 (C API)

  • Device (To reproduce Crash): Vivo V2419 (Mali-G57 GPU)

  • OS: Android 14

Feb 4 at 1:53 PM
Muhammad Hassan
#android #c++ #android-ndk #tensorflow-lite #nnapi

Accepted Answer

Unfortunately, NNAPI has been deprecated, so you're unlikely to get a fix from Vivo. As Morrison says in the comments, you'll need to migrate to LiteRT and its GPU delegate.
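If you stay on the C/C++ side, the GPU delegate plugs into the same interpreter-options flow you already have. A minimal sketch, assuming the TFLite 2.16-era GPU delegate C API (the header path may differ in newer LiteRT packages):

    #include "tensorflow/lite/delegates/gpu/delegate.h"

    // Create a GPU delegate instead of the NNAPI one.
    TfLiteGpuDelegateOptionsV2 gpuOpts = TfLiteGpuDelegateOptionsV2Default();
    gpuOpts.inference_preference = TFLITE_GPU_INFERENCE_PREFERENCE_SUSTAINED_SPEED;

    TfLiteDelegate* gpuDelegate = TfLiteGpuDelegateV2Create(&gpuOpts);
    if (gpuDelegate) {
        TfLiteInterpreterOptionsAddDelegate(options, gpuDelegate);
        // ...create the interpreter; if AllocateTensors fails, call
        // TfLiteGpuDelegateV2Delete(gpuDelegate) and fall back to CPU,
        // exactly like the NNAPI cleanup in your question.
    }

The GPU delegate can also handle quantized models, and because it drives the Mali GPU through OpenCL/OpenGL rather than the vendor's NNAPI HAL, it sidesteps the armnn-gpu service process entirely.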

The temporary answer is probably to blocklist certain devices/manufacturers and send them down the CPU path.
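A minimal sketch of that, reading the model/manufacturer from system properties in native code (the exact values to match, and how coarse to make the list, are yours to decide; verify them on a real unit):

    #include <sys/system_properties.h>
    #include <cstring>
    #include <strings.h>  // strcasecmp

    // Return true when the device is on the known-bad list and NNAPI
    // should be skipped in favour of the CPU path.
    bool ShouldSkipNnapi() {
        char model[PROP_VALUE_MAX] = {0};
        char maker[PROP_VALUE_MAX] = {0};
        __system_property_get("ro.product.model", model);
        __system_property_get("ro.product.manufacturer", maker);

        if (std::strcmp(model, "V2419") == 0) return true;  // the reported device
        if (strcasecmp(maker, "vivo") == 0) return true;    // coarser: all vivo
        return false;
    }

Call it before building the delegate: if ShouldSkipNnapi() returns true, go straight to your CPU branch and never touch the NNAPI driver.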

Ben Clark
Feb 6 at 1:38 PM