KotlinDL is a high-level Deep Learning API written in Kotlin and inspired by Keras. Under the hood, it uses TensorFlow Java API and ONNX Runtime API for Java. KotlinDL offers simple APIs for training deep learning models from scratch, importing existing Keras and ONNX models for inference, and leveraging transfer learning for tailoring existing pre-trained models to your tasks.
This project aims to make Deep Learning easier for JVM and Android developers and simplify deploying deep learning models in production environments.
Here's an example of what a classic convolutional neural network LeNet would look like in KotlinDL:
```kotlin
private const val EPOCHS = 3
private const val TRAINING_BATCH_SIZE = 1000
private const val NUM_CHANNELS = 1L
private const val IMAGE_SIZE = 28L
private const val SEED = 12L
private const val TEST_BATCH_SIZE = 1000

private val lenet5Classic = Sequential.of(
    Input(
        IMAGE_SIZE,
        IMAGE_SIZE,
        NUM_CHANNELS
    ),
    Conv2D(
        filters = 6,
        kernelSize = intArrayOf(5, 5),
        strides = intArrayOf(1, 1, 1, 1),
        activation = Activations.Tanh,
        kernelInitializer = GlorotNormal(SEED),
        biasInitializer = Zeros(),
        padding = ConvPadding.SAME
    ),
    AvgPool2D(
        poolSize = intArrayOf(1, 2, 2, 1),
        strides = intArrayOf(1, 2, 2, 1),
        padding = ConvPadding.VALID
    ),
    Conv2D(
        filters = 16,
        kernelSize = intArrayOf(5, 5),
        strides = intArrayOf(1, 1, 1, 1),
        activation = Activations.Tanh,
        kernelInitializer = GlorotNormal(SEED),
        biasInitializer = Zeros(),
        padding = ConvPadding.SAME
    ),
    AvgPool2D(
        poolSize = intArrayOf(1, 2, 2, 1),
        strides = intArrayOf(1, 2, 2, 1),
        padding = ConvPadding.VALID
    ),
    Flatten(), // 7 * 7 * 16 = 784
    Dense(
        outputSize = 120,
        activation = Activations.Tanh,
        kernelInitializer = GlorotNormal(SEED),
        biasInitializer = Constant(0.1f)
    ),
    Dense(
        outputSize = 84,
        activation = Activations.Tanh,
        kernelInitializer = GlorotNormal(SEED),
        biasInitializer = Constant(0.1f)
    ),
    Dense(
        outputSize = 10,
        activation = Activations.Linear,
        kernelInitializer = GlorotNormal(SEED),
        biasInitializer = Constant(0.1f)
    )
)

fun main() {
    val (train, test) = mnist()

    lenet5Classic.use {
        it.compile(
            optimizer = Adam(clipGradient = ClipGradientByValue(0.1f)),
            loss = Losses.SOFT_MAX_CROSS_ENTROPY_WITH_LOGITS,
            metric = Metrics.ACCURACY
        )
        it.logSummary()

        it.fit(dataset = train, epochs = EPOCHS, batchSize = TRAINING_BATCH_SIZE)

        val accuracy = it.evaluate(dataset = test, batchSize = TEST_BATCH_SIZE).metrics[Metrics.ACCURACY]
        println("Accuracy: $accuracy")
    }
}
```
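The `ConvPadding.SAME` convolutions above preserve the spatial size, while the 2×2 `VALID` average pooling halves it (28 → 14 → 7). As a quick sanity check, here is a small stdlib sketch of TensorFlow's output-size arithmetic (purely illustrative, not part of the KotlinDL API):

```kotlin
import kotlin.math.ceil

// TensorFlow-style output sizes for a convolution or pooling window.
// SAME pads the input so that outSize = ceil(inSize / stride);
// VALID uses only full windows: outSize = ceil((inSize - window + 1) / stride).
fun outputSize(inSize: Int, window: Int, stride: Int, same: Boolean): Int =
    if (same) ceil(inSize.toDouble() / stride).toInt()
    else ceil((inSize - window + 1).toDouble() / stride).toInt()

fun main() {
    // A 5x5 SAME convolution with stride 1 keeps the 28x28 MNIST image size.
    println(outputSize(28, 5, 1, same = true))   // 28
    // 2x2 VALID average pooling with stride 2 halves each spatial dimension.
    println(outputSize(28, 2, 2, same = false))  // 14
    println(outputSize(14, 2, 2, same = false))  // 7
}
```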
- Library Structure
- How to configure KotlinDL in your project
- KotlinDL, ONNX Runtime, Android, and JDK versions
- Documentation
- Examples and tutorials
- Running KotlinDL on GPU
- Logging
- Fat Jar issue
- Limitations
- Contributing
- Reporting issues/Support
- Code of Conduct
- License
KotlinDL consists of several modules:

- `kotlin-deeplearning-api`: API interfaces and classes
- `kotlin-deeplearning-impl`: implementation classes and utilities
- `kotlin-deeplearning-onnx`: inference with ONNX Runtime
- `kotlin-deeplearning-tensorflow`: learning and inference with TensorFlow
- `kotlin-deeplearning-visualization`: visualization utilities
- `kotlin-deeplearning-dataset`: dataset classes
The `kotlin-deeplearning-tensorflow` and `kotlin-deeplearning-dataset` modules are only available for desktop JVM, while the other artifacts can also be used on Android.
To use KotlinDL in your project, ensure that `mavenCentral` is added to the repositories list:

```groovy
repositories {
    mavenCentral()
}
```
Then add the necessary dependencies to your `build.gradle` file.

To start creating simple neural networks or downloading pre-trained models, just add the following dependency:
```groovy
// build.gradle
dependencies {
    implementation 'org.jetbrains.kotlinx:kotlin-deeplearning-tensorflow:[KOTLIN-DL-VERSION]'
}
```

```kotlin
// build.gradle.kts
dependencies {
    implementation("org.jetbrains.kotlinx:kotlin-deeplearning-tensorflow:[KOTLIN-DL-VERSION]")
}
```
Use the `kotlin-deeplearning-onnx` module for inference with ONNX Runtime:
```groovy
// build.gradle
dependencies {
    implementation 'org.jetbrains.kotlinx:kotlin-deeplearning-onnx:[KOTLIN-DL-VERSION]'
}
```

```kotlin
// build.gradle.kts
dependencies {
    implementation("org.jetbrains.kotlinx:kotlin-deeplearning-onnx:[KOTLIN-DL-VERSION]")
}
```
To use the full power of KotlinDL in your JVM project, add the following dependencies to your `build.gradle` file:
```groovy
// build.gradle
dependencies {
    implementation 'org.jetbrains.kotlinx:kotlin-deeplearning-tensorflow:[KOTLIN-DL-VERSION]'
    implementation 'org.jetbrains.kotlinx:kotlin-deeplearning-onnx:[KOTLIN-DL-VERSION]'
    implementation 'org.jetbrains.kotlinx:kotlin-deeplearning-visualization:[KOTLIN-DL-VERSION]'
}
```

```kotlin
// build.gradle.kts
dependencies {
    implementation("org.jetbrains.kotlinx:kotlin-deeplearning-tensorflow:[KOTLIN-DL-VERSION]")
    implementation("org.jetbrains.kotlinx:kotlin-deeplearning-onnx:[KOTLIN-DL-VERSION]")
    implementation("org.jetbrains.kotlinx:kotlin-deeplearning-visualization:[KOTLIN-DL-VERSION]")
}
```
The latest stable KotlinDL version is 0.5.2, and the latest unstable version is 0.6.0-alpha-1.
For more details, as well as `pom.xml` and `build.gradle.kts` examples, please refer to the Quick Start Guide.
You can work with KotlinDL interactively in Jupyter Notebook with the Kotlin kernel. To do so, add the required dependencies to your notebook:

```kotlin
@file:DependsOn("org.jetbrains.kotlinx:kotlin-deeplearning-tensorflow:[KOTLIN-DL-VERSION]")
```
For more details on installing Jupyter Notebook and adding the Kotlin kernel, check out the Quick Start Guide.
KotlinDL supports inference of ONNX models on the Android platform. To use KotlinDL in your Android project, add the following dependency to your `build.gradle` file:
```groovy
// build.gradle
implementation 'org.jetbrains.kotlinx:kotlin-deeplearning-onnx:[KOTLIN-DL-VERSION]'
```

```kotlin
// build.gradle.kts
implementation("org.jetbrains.kotlinx:kotlin-deeplearning-onnx:[KOTLIN-DL-VERSION]")
```
For more details, please refer to the Quick Start Guide.
This table shows the mapping between KotlinDL, TensorFlow, ONNX Runtime, Android Compile SDK, and minimum supported Java versions.

| KotlinDL Version | Minimum Java Version | ONNX Runtime Version | TensorFlow Version | Android: Compile SDK Version |
|---|---|---|---|---|
| 0.1.* | 8 | | 1.15 | |
| 0.2.0 | 8 | | 1.15 | |
| 0.3.0 | 8 | 1.8.1 | 1.15 | |
| 0.4.0 | 8 | 1.11.0 | 1.15 | |
| 0.5.0-0.5.1 | 11 | 1.12.1 | 1.15 | 31 |
| 0.5.2 | 11 | 1.14.0 | 1.15 | 31 |
| 0.6.* | 11 | 1.14.0 | 1.15 | 31 |
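When pinning versions in build automation, the constraints above can be captured in a small lookup. The helper below is purely illustrative (it is not part of KotlinDL); the data is transcribed from rows of the compatibility table:

```kotlin
// Minimum Java and matching ONNX Runtime / TensorFlow versions per KotlinDL
// release, transcribed from the compatibility table (rows without an ONNX
// Runtime column are omitted here).
data class Requirements(val minJava: Int, val onnxRuntime: String, val tensorFlow: String)

val compatibility = mapOf(
    "0.3.0" to Requirements(minJava = 8, onnxRuntime = "1.8.1", tensorFlow = "1.15"),
    "0.4.0" to Requirements(minJava = 8, onnxRuntime = "1.11.0", tensorFlow = "1.15"),
    "0.5.2" to Requirements(minJava = 11, onnxRuntime = "1.14.0", tensorFlow = "1.15"),
)

fun main() {
    val req = compatibility.getValue("0.5.2")
    // prints: KotlinDL 0.5.2 needs Java 11+ and ONNX Runtime 1.14.0
    println("KotlinDL 0.5.2 needs Java ${req.minJava}+ and ONNX Runtime ${req.onnxRuntime}")
}
```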
- Presentations and videos:
- Deep Learning with KotlinDL (Zinoviev Alexey at Huawei Developer Group HDG UK 2021, slides)
- Introduction to Deep Learning with KotlinDL (Zinoviev Alexey at Kotlin Budapest User Group 2021, slides)
- Change log for KotlinDL
- Full KotlinDL API reference
You do not need prior experience with Deep Learning to use KotlinDL.
We are working on including extensive documentation to help you get started. At this point, please feel free to check out the following tutorials we have prepared:
- Quick Start Guide
- Creating your first neural network
- Importing a Keras model
- Transfer learning
- Transfer learning with Functional API
- Running inference with ONNX models on JVM
- Running inference with ONNX models on Android
For more inspiration, take a look at the code examples in this repository and Sample Android App.
To enable training and inference on a GPU, please read the TensorFlow GPU Support page and install the CUDA framework to allow calculations on a GPU device.
Note that only NVIDIA devices are supported.

You will also need to add the following dependencies to your project if you wish to leverage a GPU:
```groovy
// build.gradle
implementation 'org.tensorflow:libtensorflow:1.15.0'
implementation 'org.tensorflow:libtensorflow_jni_gpu:1.15.0'
```

```kotlin
// build.gradle.kts
implementation("org.tensorflow:libtensorflow:1.15.0")
implementation("org.tensorflow:libtensorflow_jni_gpu:1.15.0")
```
On Windows, the following distributions are required:
- CUDA cuda_10.0.130_411.31_win10
- cudnn-7.6.3
- C++ redistributable parts
For inference of ONNX models on a CUDA device, you will also need to add the following dependencies to your project:
```groovy
// build.gradle
api 'com.microsoft.onnxruntime:onnxruntime_gpu:1.14.0'
```

```kotlin
// build.gradle.kts
api("com.microsoft.onnxruntime:onnxruntime_gpu:1.14.0")
```
For more information about ONNX Runtime and CUDA version compatibility, please refer to the ONNX Runtime CUDA Execution Provider page.
By default, the API module uses the kotlin-logging library to organize the logging process separately from the specific logger implementation.
You can use any widely known JVM logging library with a Simple Logging Facade for Java (SLF4J) implementation, such as Logback or Log4j/Log4j2.

If you wish to use Log4j2, add the following dependencies and put the configuration file `log4j2.xml` in the `src/main/resources` folder of your project:
```groovy
// build.gradle
implementation 'org.apache.logging.log4j:log4j-api:2.17.2'
implementation 'org.apache.logging.log4j:log4j-core:2.17.2'
implementation 'org.apache.logging.log4j:log4j-slf4j-impl:2.17.2'
```

```kotlin
// build.gradle.kts
implementation("org.apache.logging.log4j:log4j-api:2.17.2")
implementation("org.apache.logging.log4j:log4j-core:2.17.2")
implementation("org.apache.logging.log4j:log4j-slf4j-impl:2.17.2")
```
```xml
<Configuration status="WARN">
    <Appenders>
        <Console name="STDOUT" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="debug">
            <AppenderRef ref="STDOUT" level="DEBUG"/>
        </Root>
        <Logger name="io.jhdf" level="off" additivity="true">
            <appender-ref ref="STDOUT"/>
        </Logger>
    </Loggers>
</Configuration>
```
If you wish to use Logback, include the following dependency and put the configuration file `logback.xml` in the `src/main/resources` folder of your project:
```groovy
// build.gradle
implementation 'ch.qos.logback:logback-classic:1.4.5'
```

```kotlin
// build.gradle.kts
implementation("ch.qos.logback:logback-classic:1.4.5")
```
```xml
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="info">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
```
These configuration files can be found in the `examples` module.
There is a known Stack Overflow question and a TensorFlow issue with Fat Jar creation and execution on Amazon EC2 instances:

```
java.lang.UnsatisfiedLinkError: /tmp/tensorflow_native_libraries-1562914806051-0/libtensorflow_jni.so: libtensorflow_framework.so.1: cannot open shared object file: No such file or directory
```

Although the bug describing this problem was closed in the TensorFlow 1.14 release, it was not fully fixed, and an additional line in the build script is still required.
One simple solution is to add a TensorFlow version specification to the Jar's manifest. Below is an example of a Gradle build task for Fat Jar creation:
```groovy
// build.gradle
task fatJar(type: Jar) {
    manifest {
        attributes 'Implementation-Version': '1.15'
    }
    classifier = 'all'
    from { configurations.runtimeClasspath.collect { it.isDirectory() ? it : zipTree(it) } }
    with jar
}
```
```kotlin
// build.gradle.kts
plugins {
    kotlin("jvm") version "1.5.31"
    id("com.github.johnrengelman.shadow") version "7.0.0"
}

tasks {
    shadowJar {
        manifest {
            attributes(Pair("Main-Class", "MainKt"))
            attributes(Pair("Implementation-Version", "1.15"))
        }
    }
}
```
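To confirm that the `Implementation-Version` attribute actually made it into the resulting Fat Jar, you can inspect the manifest with the JDK's `java.util.jar` classes. The sketch below is purely illustrative: it writes a tiny temporary jar carrying the same attribute the build scripts set, then reads it back; for a real build you would pass the path of your Fat Jar instead.

```kotlin
import java.io.File
import java.util.jar.Attributes
import java.util.jar.JarFile
import java.util.jar.JarOutputStream
import java.util.jar.Manifest

// Reads the Implementation-Version attribute from a jar's manifest,
// or returns null if the jar has no manifest or no such attribute.
fun implementationVersion(jarPath: String): String? =
    JarFile(jarPath).use { jar ->
        jar.manifest?.mainAttributes?.getValue(Attributes.Name.IMPLEMENTATION_VERSION)
    }

fun main() {
    // For demonstration, write a tiny jar with the attribute the build script sets...
    val manifest = Manifest().apply {
        mainAttributes[Attributes.Name.MANIFEST_VERSION] = "1.0"
        mainAttributes[Attributes.Name.IMPLEMENTATION_VERSION] = "1.15"
    }
    val jar = File.createTempFile("fat", ".jar")
    JarOutputStream(jar.outputStream(), manifest).close()

    // ...then read it back, exactly as you would for your real Fat Jar.
    println(implementationVersion(jar.path))  // 1.15
    jar.delete()
}
```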
Currently, only a limited set of deep learning architectures is supported. Here's the list of available layers:
- Core layers: `Input`, `Dense`, `Flatten`, `Reshape`, `Dropout`, `BatchNorm`.
- Convolutional layers: `Conv1D`, `Conv2D`, `Conv3D`; `Conv1DTranspose`, `Conv2DTranspose`, `Conv3DTranspose`; `DepthwiseConv2D`; `SeparableConv2D`.
- Pooling layers: `MaxPool1D`, `MaxPool2D`, `MaxPooling3D`; `AvgPool1D`, `AvgPool2D`, `AvgPool3D`; `GlobalMaxPool1D`, `GlobalMaxPool2D`, `GlobalMaxPool3D`; `GlobalAvgPool1D`, `GlobalAvgPool2D`, `GlobalAvgPool3D`.
- Merge layers: `Add`, `Subtract`, `Multiply`; `Average`, `Maximum`, `Minimum`; `Dot`; `Concatenate`.
- Activation layers: `ELU`, `LeakyReLU`, `PReLU`, `ReLU`, `Softmax`, `ThresholdedReLU`; `ActivationLayer`.
- Cropping layers: `Cropping1D`, `Cropping2D`, `Cropping3D`.
- Upsampling layers: `UpSampling1D`, `UpSampling2D`, `UpSampling3D`.
- Zero padding layers: `ZeroPadding1D`, `ZeroPadding2D`, `ZeroPadding3D`.
- Other layers: `Permute`, `RepeatVector`.
The TensorFlow 1.15 Java API is currently used for layer implementation, but the project will switch to TensorFlow 2.x in the near future. This, however, does not affect the high-level API. Inference with TensorFlow models is currently supported only on desktop platforms.
Read the Contributing Guidelines.
Please use GitHub issues for filing feature requests and bug reports. You are also welcome to join the #kotlindl channel in Kotlin Slack.
This project and the corresponding community are governed by the JetBrains Open Source and Community Code of Conduct. Please make sure you read it.
KotlinDL is licensed under the Apache 2.0 License.