CloudBase-MCP and neurolink

These tools are complementary: CloudBase-MCP provides backend infrastructure and services that can be integrated into neurolink's multi-provider AI development platform via MCP server integration, letting developers treat CloudBase as one deployment target among several providers.

Metric          CloudBase-MCP    neurolink
Score           73 (Verified)    64 (Established)
Maintenance     25/25            13/25
Adoption        10/25            9/25
Maturity        18/25            18/25
Community       20/25            24/25
Stars           979              112
Forks           123              95
Downloads
Commits (30d)   310              0
Language        TypeScript       TypeScript
License         MIT              MIT

No Dependents
No risk flags

About CloudBase-MCP

TencentCloudBase/CloudBase-MCP

CloudBase MCP - Connect CloudBase to your AI Agent. Go from AI prompt to live app.

Implements the Model Context Protocol (MCP) to bridge AI IDEs with Tencent CloudBase, enabling AI agents to programmatically deploy full-stack applications through unified APIs for serverless functions, databases, static hosting, and storage. Supports both local Node.js-based execution and cloud-hosted HTTP transport, with built-in AI-optimized templates and automatic debugging via log analysis. Integrates with 15+ AI coding tools (Cursor, WindSurf, CodeBuddy, GitHub Copilot, etc.) and supports Web apps, WeChat mini-programs, and backend services end-to-end.
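Under MCP, an AI agent invokes server capabilities through JSON-RPC 2.0 "tools/call" requests. The sketch below shows that message shape in self-contained TypeScript; the tool name "deployStaticHosting" and its arguments are hypothetical, chosen only to illustrate the kind of deployment operation described above, and real CloudBase-MCP tool names may differ.

```typescript
// Shape of an MCP tool-call request (JSON-RPC 2.0, method "tools/call").
interface McpToolCall {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

// Build a well-formed tool-call message for a given tool and arguments.
function makeToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): McpToolCall {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

// Hypothetical call: ask the server to publish a local build to static hosting.
const call = makeToolCall(1, "deployStaticHosting", {
  localPath: "./dist",
  cloudPath: "/",
});
console.log(JSON.stringify(call));
```

In practice an MCP client library serializes and transports such messages for you (over stdio for local Node.js execution, or HTTP for the cloud-hosted transport); the sketch only makes the wire-level shape concrete.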

About neurolink

juspay/neurolink

Universal AI Development Platform with MCP server integration, multi-provider support, and professional CLI. Build, test, and deploy AI applications with multiple AI providers.

Abstracts multi-provider LLM communication as composable token streams using a pipe-based architecture, unifying 13 AI providers (OpenAI, Anthropic, Google, AWS Bedrock, Azure, etc.) under a single TypeScript API. Built-in features include 64+ MCP server tools, Redis-backed persistent memory with LLM-powered condensation, context-window auto-compaction with per-provider token estimation, RAG with hybrid search and reranking, and multi-provider failover for cost optimization. Deployable via a professional CLI or as HTTP servers (Hono, Express, Fastify, Koa), with observability hooks that plug into existing OpenTelemetry instrumentation.
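The "pipe-based architecture over composable token streams" pattern can be sketched in a few lines of self-contained TypeScript. This is not neurolink's actual API; it is a generic analogue, where each stage is a generator transform over a token stream and `pipe` composes stages left to right.

```typescript
// A stage consumes a token stream and yields a transformed token stream.
type Transform = (tokens: Iterable<string>) => Iterable<string>;

// Compose stages left to right: pipe(a, b)(s) === b(a(s)).
function pipe(...stages: Transform[]): Transform {
  return (tokens) =>
    stages.reduce((stream: Iterable<string>, stage) => stage(stream), tokens);
}

// Example stage: drop whitespace-only tokens.
function* dropEmpty(tokens: Iterable<string>): Iterable<string> {
  for (const t of tokens) if (t.trim() !== "") yield t;
}

// Example stage: normalize tokens to upper case.
function* uppercase(tokens: Iterable<string>): Iterable<string> {
  for (const t of tokens) yield t.toUpperCase();
}

const pipeline = pipe(dropEmpty, uppercase);
const out = [...pipeline(["hello", " ", "world"])];
console.log(out); // ["HELLO", "WORLD"]
```

Because every stage shares the same stream-in/stream-out signature, concerns like context compaction, memory condensation, or provider failover can each be written as one more stage and composed without touching the others, which is the design benefit the description above points at.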

Scores updated daily from GitHub, PyPI, and npm data.