tiny-cuda-nn and tiny-dnn

These are competitors: both are lightweight C++ deep learning frameworks, but tiny-cuda-nn leverages CUDA for GPU-accelerated performance, while tiny-dnn emphasizes header-only, dependency-free usage on the CPU.

                 tiny-cuda-nn               tiny-dnn
Score            53 (Established)           51 (Established)
Maintenance      6/25                       0/25
Adoption         10/25                      10/25
Maturity         16/25                      16/25
Community        21/25                      25/25
Stars            4,430                      6,020
Forks            550                        1,398
Downloads        —                          —
Commits (30d)    0                          0
Language         C++                        C++
License          —                          —
Flags            No package; no dependents  Stale 6m; no package; no dependents

About tiny-cuda-nn

NVlabs/tiny-cuda-nn

Lightning fast C++/CUDA neural network framework

Provides fully-fused MLP kernels and multiresolution hash grid encodings optimized for neural field applications, with optional JIT compilation that fuses encoding, network, and custom operations into single CUDA kernels for 1.5–5x speedups. Offers a JSON-configurable C++ API supporting various encodings, losses, and optimizers, with Python bindings for PyTorch integration and lower-level CUDA RTC APIs for embedding models directly into application kernels.
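The JSON-configurable API mentioned above can be sketched as a config like the one below. Field names (HashGrid encoding, FullyFusedMLP network, L2 loss, Adam optimizer) follow the format shown in the tiny-cuda-nn README, but exact option names and defaults should be verified against the version in use; note that tiny-cuda-nn's config parser accepts `//` comments, which plain JSON does not.

```json
{
	"encoding": {
		"otype": "HashGrid",          // multiresolution hash grid encoding
		"n_levels": 16,
		"n_features_per_level": 2,
		"log2_hashmap_size": 19,
		"base_resolution": 16,
		"per_level_scale": 2.0
	},
	"network": {
		"otype": "FullyFusedMLP",     // the fully-fused MLP kernel
		"activation": "ReLU",
		"output_activation": "None",
		"n_neurons": 64,
		"n_hidden_layers": 2
	},
	"loss": {
		"otype": "L2"
	},
	"optimizer": {
		"otype": "Adam",
		"learning_rate": 1e-3
	}
}
```

In C++ this config would typically be passed to the framework's factory function to build the model; in the PyTorch bindings, the `"encoding"` and `"network"` sub-objects are passed as separate dictionaries.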

About tiny-dnn

tiny-dnn/tiny-dnn

header only, dependency-free deep learning framework in C++14

Supports CPU-optimized inference and training with TBB threading and SIMD vectorization (SSE/AVX), achieving 98.8% accuracy on MNIST. Provides a composable layer-based API covering CNNs, pooling, batch normalization, and modern optimizers (Adam, RMSprop), with optional OpenCL/NNPACK acceleration for convolutions. Can import pre-trained Caffe models and produces deterministic execution without exceptions or garbage collection, targeting embedded deployment.
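The composable layer-based API can be sketched as below, modeled on the LeNet-style example in the tiny-dnn README. The layer aliases (`conv`, `ave_pool`, `fc`), the `operator<<` composition style, and the `train<mse>` call follow that example; the data containers here are placeholders, and this is an illustrative sketch rather than a tested build.

```cpp
#include "tiny_dnn/tiny_dnn.h"  // header-only: no linking step required

using namespace tiny_dnn;
using namespace tiny_dnn::activation;
using namespace tiny_dnn::layers;

int main() {
  // Compose a small LeNet-style CNN by streaming layers into the network.
  network<sequential> net;
  net << conv(32, 32, 5, 1, 6) << tanh()   // 32x32 input, 5x5 kernel, 1->6 channels
      << ave_pool(28, 28, 6, 2) << tanh()  // 2x2 average pooling
      << fc(14 * 14 * 6, 120) << tanh()
      << fc(120, 10);                      // 10-class output

  // Placeholder training data: fill with real 32x32 inputs and class labels.
  std::vector<vec_t> train_images;
  std::vector<label_t> train_labels;

  adagrad optimizer;
  net.train<mse>(optimizer, train_images, train_labels,
                 /*batch_size=*/16, /*epochs=*/10);
}
```

Because the framework is header-only, dropping the `tiny_dnn` directory onto the include path is enough to compile this with any C++14 compiler.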

Scores updated daily from GitHub, PyPI, and npm data.