LSTM-Human-Activity-Recognition and Deep-Learning-for-Human-Activity-Recognition

Both projects offer deep learning implementations for sensor-based human activity recognition, making them **competitors**: one would typically choose between them based on architectural preference (a TensorFlow LSTM versus Keras CNN, DeepConvLSTM, or SDAE-with-LightGBM models) or on the dataset used in the examples.

| Metric | LSTM-Human-Activity-Recognition | Deep-Learning-for-Human-Activity-Recognition |
|---|---|---|
| Maintenance | 0/25 | 0/25 |
| Adoption | 10/25 | 9/25 |
| Maturity | 16/25 | 16/25 |
| Community | 25/25 | 19/25 |
| Stars | 3,549 | 74 |
| Forks | 938 | 17 |
| Downloads | | |
| Commits (30d) | 0 | 0 |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT | MIT |
| Status | Stale 6m, No Package, No Dependents | Stale 6m, No Package, No Dependents |

About LSTM-Human-Activity-Recognition

guillaume-chevalier/LSTM-Human-Activity-Recognition

Human Activity Recognition example using TensorFlow on smartphone sensors dataset and an LSTM RNN. Classifying the type of movement amongst six activity categories - Guillaume Chevalier

# Technical Summary

Employs a many-to-one LSTM architecture that processes 128-sample time windows of 9-channel inertial sensor data (3-axis accelerometer and gyroscope readings) without extensive feature engineering, relying instead on the recurrent network to learn temporal patterns across sequential measurements automatically. Preprocessing is minimal beyond gravity filtering, in contrast with traditional signal-processing-heavy approaches that require manual feature extraction. Built with TensorFlow, the project includes Jupyter notebook implementations demonstrating end-to-end data loading, model training, and evaluation metrics on the UCI HAR Dataset.
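As a rough sketch of the input pipeline this summary describes, the code below slices a raw multichannel recording into the `(n_windows, 128, 9)` tensor a many-to-one LSTM consumes. The helper name and the 50% overlap (step of 64 samples) are assumptions based on the UCI HAR Dataset's published windowing protocol, not code taken from the repository.

```python
import numpy as np

def segment_windows(signal, window_len=128, step=64):
    """Slice a (T, n_channels) inertial recording into overlapping
    fixed-length windows. A step of window_len // 2 gives the 50%%
    overlap used when the UCI HAR Dataset was prepared."""
    n_windows = (signal.shape[0] - window_len) // step + 1
    return np.stack(
        [signal[i * step : i * step + window_len] for i in range(n_windows)]
    )

# Example: a synthetic recording of 1024 samples across 9 channels
# (3-axis accelerometer + 3-axis gyroscope + 3-axis total acceleration).
raw = np.random.randn(1024, 9)
windows = segment_windows(raw)
print(windows.shape)  # (15, 128, 9)
```

Each resulting window is one training example; the LSTM reads its 128 timesteps and emits a single prediction over the six activity classes.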

About Deep-Learning-for-Human-Activity-Recognition

takumiw/Deep-Learning-for-Human-Activity-Recognition

Keras implementation of CNN, DeepConvLSTM, and SDAE and LightGBM for sensor-based Human Activity Recognition (HAR).
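To make the DeepConvLSTM idea concrete, here is a minimal Keras sketch: stacked 1D convolutions extract local features from each sensor window, and recurrent layers model the temporal dynamics on top. The layer counts and sizes are illustrative assumptions, not the repository's exact configuration.

```python
import tensorflow as tf

def build_deepconvlstm(window_len=128, n_channels=9, n_classes=6):
    # Illustrative DeepConvLSTM-style sketch: conv feature extractor
    # followed by LSTM temporal modeling; hyperparameters are assumed.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window_len, n_channels)),
        tf.keras.layers.Conv1D(64, 5, activation="relu"),
        tf.keras.layers.Conv1D(64, 5, activation="relu"),
        tf.keras.layers.LSTM(128, return_sequences=True),
        tf.keras.layers.LSTM(128),  # keep only the last hidden state
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = build_deepconvlstm()
print(model.output_shape)  # (None, 6)
```

The plain-CNN variant would replace the LSTM layers with pooling and flattening, while the SDAE-plus-LightGBM pipeline uses an autoencoder for feature learning and a gradient-boosted-tree classifier instead of a dense softmax head.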

Scores updated daily from GitHub, PyPI, and npm data.