A while ago, I built a TensorFlow demo for Kotlin/Native. It uses the TensorFlow backend, arranging simple operations into a graph and running it on a session.

The primary client language of TensorFlow is Python, but there are projects to support other programming languages. All clients are based on the same backend, which is accessible through the TensorFlow C API. It provides, for example, the following function for creating a tensor:

extern TF_Tensor* TF_NewTensor(
    TF_DataType, 
    const int64_t* dims, 
    int num_dims,
    void* data, 
    size_t len,
    void (*deallocator)(void* data, size_t len, void* arg),
    void* deallocator_arg);

The demo is built on top of this C API, showing what a TensorFlow client in Kotlin/Native could look like.

Kotlin is a programming language that compiles to the JVM, JavaScript, and native code, enabling applications on all major platforms. It was created by JetBrains mainly as a replacement for Java, reducing boilerplate code and increasing error safety while keeping strong interoperability with existing code and first-class tooling support in the IntelliJ IDE family (Android Studio, CLion, …). The language gained a lot of popularity when Google announced it as a first-class language for Android development, next to Java.

Kotlin/Native compiles Kotlin code into native binaries, with potential targets such as embedded applications, iOS, and WebAssembly. It is currently in pre-release, but already turned out to be very usable for my use case. Just as Kotlin is interoperable with Java libraries, Kotlin/Native is interoperable with existing C libraries. Compared to C, Kotlin provides many modern language features, such as generics and extension functions, and brings some of the convenience of the Kotlin standard library into native code, for example for sequence manipulation (filter, map, …).
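To illustrate, the standard-library collection operations mentioned above work the same way whether the code is compiled for the JVM or to a native binary; a minimal sketch:

```kotlin
fun main(args: Array<String>) {
    // Standard-library collection operations in action:
    // keep the even numbers from 1..10 and square them.
    val squaredEvens = (1..10)
            .filter { it % 2 == 0 }
            .map { it * it }
    println(squaredEvens)  // [4, 16, 36, 64, 100]
}
```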

Build setup

Fortunately, TensorFlow has good documentation on getting started with its C library and even has some instructions on how to build a TensorFlow client in a new language. Additionally, I found it helpful to have a look at the test cases to understand the C API. Kotlin/Native’s C interop features are also well documented.

The build script performs the following three steps:

1: Install the TensorFlow binary for CPU or GPU:

curl -s -L "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-$TF_TYPE-$TF_TARGET-x86_64-1.1.0.tar.gz" | \
tar -C $TF_TARGET_DIRECTORY -xz

2: Call the Kotlin/Native tool to generate bindings for the TensorFlow library:

cinterop \
    -def $DIR/src/main/c_interop/tensorflow.def \
    -compilerOpts "$CFLAGS" \
    -target $TARGET \
    -o $DIR/build/c_interop/tensorflow \
    || exit 1

The tensorflow.def specifies the headers that we want to include in the bindings, in our case the complete TensorFlow C API:

headers = tensorflow/c/c_api.h
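The .def format can also carry interop options directly instead of passing them on the command line; a hypothetical, more self-contained variant might look like the following (the include path is an illustrative placeholder — the demo passes its compiler and linker flags via the cinterop and konanc invocations instead):

```
headers = tensorflow/c/c_api.h
headerFilter = tensorflow/**
compilerOpts = -I/usr/local/include
linkerOpts = -ltensorflow
```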

This step also generates corresponding Kotlin wrappers for the C API, for example:

fun TF_NewTensor(
    arg0: TF_DataType, 
    dims: CValuesRef<int64_tVar>?, 
    num_dims: Int, 
    data: CValuesRef<*>?, 
    len: size_t, 
    deallocator: CPointer<CFunction<(COpaquePointer?, size_t, COpaquePointer?) -> Unit>>?, 
    deallocator_arg: CValuesRef<*>?
): CPointer<TF_Tensor>? { ... }

3: Call the Kotlin/Native compiler and linker to obtain an executable:

konanc $COMPILER_ARGS \
    -target $TARGET $DIR/src/main/kotlin/HelloTensorflow.kt \
    -library $DIR/build/c_interop/tensorflow \
    -o $DIR/build/bin/HelloTensorflow \
    -linkerOpts "-L$TF_TARGET_DIRECTORY/lib -ltensorflow" \
    || exit 1

As an alternative to the shell script, an equivalent Gradle script is available, streamlining some of this process. Now we are left with the fun part:

Writing the client code

To increase readability, we can use Kotlin’s type alias feature:

typealias Status = CPointer<TF_Status>
typealias Operation = CPointer<TF_Operation>
typealias Tensor = CPointer<TF_Tensor>

Most TensorFlow functions take a status argument that allows checking whether an error occurred. We can streamline error checking by defining the following extension functions:

val Status.isOk: Boolean get() = TF_GetCode(this) == TF_OK
val Status.errorMessage: String get() = TF_Message(this)!!.toKString()
fun Status.delete() = TF_DeleteStatus(this)
fun Status.validate() {
    try {
        if (!isOk) {
            throw Error("Status is not ok: $errorMessage")
        }
    } finally {
        delete()
    }
}

The following function takes a block of code and makes sure that the status is ok after the block has executed:

inline fun <T> statusValidated(block: (Status) -> T): T {
    val status = TF_NewStatus()!!
    val result = block(status)
    status.validate()
    return result
}

This simplifies defining operations in our graph class, for example the ones for defining a constant and a graph input:

class Graph {
    val tensorflowGraph = TF_NewGraph()!!

    inline fun operation(type: String, name: String, initDescription: (CPointer<TF_OperationDescription>) -> Unit): Operation {
        val description = TF_NewOperation(tensorflowGraph, type, name)!!
        initDescription(description)
        return statusValidated { TF_FinishOperation(description, it)!! }
    }

    fun constant(value: Int, name: String = "scalarIntConstant") = operation("Const", name) { description ->
        statusValidated { TF_SetAttrTensor(description, "value", scalarTensor(value), it) }
        TF_SetAttrType(description, "dtype", TF_INT32)
    }

    fun intInput(name: String = "input") = operation("Placeholder", name) { description ->
        TF_SetAttrType(description, "dtype", TF_INT32)
    }

    ...

Having defined constants and inputs, we still cannot calculate anything. To add two tensors, we define the addition:

    ...

    fun add(left: Operation, right: Operation, name: String = "add") = memScoped {
        val inputs = allocArray<TF_Output>(2)
        inputs[0].apply { oper = left; index = 0 }
        inputs[1].apply { oper = right; index = 0 }

        operation("AddN", name) { description ->
            TF_AddInputList(description, inputs, 2)
        }
    }

    // TODO set unique operation names
    operator fun Operation.plus(right: Operation) = add(this, right)
}

To allow using the plus sign, we also overloaded the operator. A mechanism ensuring unique operation names would have to be implemented for multiple uses, but since we only use the plus operation once in this demo, we don't need it here.
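One simple way such a naming mechanism could work (a sketch — the class and naming scheme are my own, not part of the demo) is a per-graph registry that suffixes a running index whenever a base name has already been handed out; add() could then pass the uniquified name to operation():

```kotlin
// Sketch: hands out unique operation names by appending a counter
// to base names that were already used (add, add_1, add_2, …).
class NameRegistry {
    private val counts = mutableMapOf<String, Int>()

    fun uniqueName(base: String): String {
        val count = counts[base] ?: 0
        counts[base] = count + 1
        return if (count == 0) base else "${base}_$count"
    }
}

fun main(args: Array<String>) {
    val names = NameRegistry()
    println(names.uniqueName("add"))  // add
    println(names.uniqueName("add"))  // add_1
    println(names.uniqueName("add"))  // add_2
}
```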

The following function allows creating scalar (0-dimensional) tensors:

fun scalarTensor(value: Int): Tensor {
    val data = nativeHeap.allocArray<IntVar>(1)
    data[0] = value

    return TF_NewTensor(TF_INT32,
            dims = null, num_dims = 0,
            data = data, len = IntVar.size,
            deallocator = staticCFunction { dataToFree, _, _ -> nativeHeap.free(dataToFree!!.reinterpret<IntVar>()) },
            deallocator_arg = null)!!
}
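The final example at the end of this post reads its result back through a scalarIntValue helper that is elided from the listings. A sketch of how such a reverse mapping could look — the exact receiver type depends on the elided invoke overload, and TF_TensorData is the C API function returning a pointer to a tensor's raw data buffer:

```kotlin
// Sketch: extract the Int from a scalar TF_INT32 tensor by
// reinterpreting the tensor's data buffer as an IntVar.
val Tensor?.scalarIntValue: Int
    get() = TF_TensorData(this!!)!!.reinterpret<IntVar>()[0]
```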

In TensorFlow, the actual execution of graphs on data happens in sessions. To run a session, we need a function that takes the value assigned to each input and returns the output tensor values for the queried outputs:

class Session(val graph: Graph) {    
    ...

    operator fun invoke(
        outputs: List<Operation>, 
        inputsWithValues: List<Pair<Operation, Tensor>> = listOf()
    ): List<Tensor?> {
        setInputsWithValues(inputsWithValues)
        setOutputs(outputs)
    
        return invoke()
    }
    
    ...

The most important bit of the invoke function is the TF_SessionRun call:

    statusValidated {
        TF_SessionRun(tensorflowSession, null,
                inputsCArray, inputValues.toCValues(), inputs.size,
                outputsCArray, outputValuesCArray, outputs.size,
                targets.toCValues(), targets.size,
                null, it)
    }

Implementing the session in full adds some ceremony, mainly due to input/output memory allocation and dispose operations.

I added the following function to the graph class to streamline running sessions:

    inline fun <T> withSession(block: Session.() -> T): T {
        val session = Session(this)
        try {
            return session.block()
        } finally {
            session.dispose()
        }
    }

To put everything together, we define a small graph that adds 2 to any given input. We execute the graph on a session and feed 3 as an input:

fun main(args: Array<String>) {
    println("Hello, TensorFlow ${TF_Version()!!.toKString()}!")

    val result = Graph().run {
        val input = intInput()
        val output = input + constant(2)
        
        withSession { 
            val inputWithValues = listOf(input to scalarTensor(3))
            invoke(output, inputWithValues).scalarIntValue
        }
    }

    println("3 + 2 is $result.")
}

That’s it! We have seen how the TensorFlow backend can be used from Kotlin/Native. The full code is available in the Kotlin/Native repository along with instructions for how to run it. If you have questions or feedback, please comment below.

Some thoughts on the viability of using Kotlin/Native for machine learning are outlined at the end of the follow-up post, which describes how a handwritten digit classifier can be trained in Kotlin/Native using Torch.