tiny-cuda-nn and neural-network-cuda

tiny-cuda-nn is a lightning-fast C++/CUDA framework, while neural-network-cuda is a neural network built from scratch; the two are **competitors**, since both implement neural networks in CUDA/C++, though tiny-cuda-nn appears to be the more mature and performant option.

| Metric | tiny-cuda-nn | neural-network-cuda |
| --- | --- | --- |
| Overall score | 53 (Established) | 46 (Emerging) |
| Maintenance | 6/25 | 2/25 |
| Adoption | 10/25 | 9/25 |
| Maturity | 16/25 | 16/25 |
| Community | 21/25 | 19/25 |
| Stars | 4,430 | 87 |
| Forks | 550 | 20 |
| Downloads | n/a | n/a |
| Commits (30d) | 0 | 0 |
| Language | C++ | Cuda |
| License | n/a | GPL-3.0 |
| Flags | No package, no dependents | Stale 6m, no package, no dependents |

About tiny-cuda-nn

NVlabs/tiny-cuda-nn

Lightning fast C++/CUDA neural network framework

Provides fully-fused MLP kernels and multiresolution hash grid encodings optimized for neural field applications, with optional JIT compilation that fuses encoding, network, and custom operations into single CUDA kernels for 1.5–5x speedups. Offers a JSON-configurable C++ API supporting various encodings, losses, and optimizers, with Python bindings for PyTorch integration and lower-level CUDA RTC APIs for embedding models directly into application kernels.
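The JSON-configurable API means an entire model (loss, optimizer, encoding, network) is described by a single config object. Below is a minimal sketch of such a configuration, pairing a multiresolution hash grid encoding with a fully-fused MLP; the field names follow the format documented in the tiny-cuda-nn README, but treat the exact values as illustrative assumptions rather than recommended settings.

```python
import json

# Sketch of a tiny-cuda-nn model configuration: a HashGrid encoding
# feeding a FullyFusedMLP. Field names follow the tiny-cuda-nn README;
# the specific hyperparameter values here are illustrative.
config = {
    "loss": {"otype": "L2"},
    "optimizer": {"otype": "Adam", "learning_rate": 1e-3},
    "encoding": {
        "otype": "HashGrid",        # multiresolution hash grid encoding
        "n_levels": 16,
        "n_features_per_level": 2,
        "log2_hashmap_size": 19,
        "base_resolution": 16,
    },
    "network": {
        "otype": "FullyFusedMLP",   # fully-fused kernels, fixed-width layers
        "activation": "ReLU",
        "output_activation": "None",
        "n_neurons": 64,
        "n_hidden_layers": 2,
    },
}

print(json.dumps(config, indent=2))
```

In the C++ API this JSON is passed when constructing the model; the PyTorch bindings accept the same dictionaries, so one config can drive both paths.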

About neural-network-cuda

BobMcDear/neural-network-cuda

Neural network from scratch in CUDA/C++


Scores updated daily from GitHub, PyPI, and npm data.