neurolink and mcp-ai-agent-guidelines

neurolink: 70 (Verified)
  Maintenance 13/25 | Adoption 9/25 | Maturity 24/25 | Community 24/25
  Stars: 112 | Forks: 95 | Downloads: | Commits (30d): 0 | Language: TypeScript | License: MIT
  No risk flags

mcp-ai-agent-guidelines: 55 (Established)
  Maintenance 13/25 | Adoption 4/25 | Maturity 24/25 | Community 14/25
  Stars: 5 | Forks: 3 | Downloads: | Commits (30d): 0 | Language: TypeScript | License: MIT
  No risk flags

About neurolink

juspay/neurolink

Universal AI Development Platform with MCP server integration, multi-provider support, and professional CLI. Build, test, and deploy AI applications with multiple AI providers.

Abstracts multi-provider LLM communication as composable token streams using a pipe-based architecture, unifying 13 AI providers (OpenAI, Anthropic, Google, AWS Bedrock, Azure, etc.) under a single TypeScript API. Built-in features include 64+ MCP server tools, Redis-backed persistent memory with LLM-powered condensation, context window auto-compaction with per-provider token estimation, RAG with hybrid search and reranking, and multi-provider failover for cost optimization. Deployable via a professional CLI or as HTTP servers (Hono, Express, Fastify, Koa), with observability hooks for existing OpenTelemetry instrumentation.
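The core idea behind a unified multi-provider API with failover can be sketched in a few lines. This is an illustrative sketch only: the `Provider` interface and `generateWithFailover` function are hypothetical names invented here, not neurolink's actual API.

```typescript
// Hypothetical sketch: several LLM providers behind one interface,
// with failover down a priority list. Names are illustrative, not
// neurolink's real API surface.

interface Provider {
  name: string;
  generate(prompt: string): Promise<string>;
}

// Try providers in priority order; fall through to the next on failure.
async function generateWithFailover(
  providers: Provider[],
  prompt: string
): Promise<string> {
  let lastError: unknown;
  for (const p of providers) {
    try {
      return await p.generate(prompt);
    } catch (err) {
      lastError = err; // remember the failure and try the next provider
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}

// Mock providers: the first always fails, the second succeeds.
const flaky: Provider = {
  name: "flaky",
  generate: async () => {
    throw new Error("rate limited");
  },
};
const stable: Provider = {
  name: "stable",
  generate: async (prompt) => `echo: ${prompt}`,
};

generateWithFailover([flaky, stable], "hello").then((out) =>
  console.log(out) // → "echo: hello"
);
```

The same shape extends naturally to cost-based ordering: sort the provider list by price per token before calling, and the cheapest healthy provider wins.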

About mcp-ai-agent-guidelines

Anselmoo/mcp-ai-agent-guidelines

A Model Context Protocol (MCP) server offering professional tools and templates for hierarchical prompting, code hygiene, visualization, memory optimization, and agile planning.

Implements agent-to-agent orchestration with tool chaining and context propagation, enabling multi-step workflows where tools invoke each other with shared state. Built as a Node.js MCP server with TypeScript, it exposes specialized tools for code quality analysis (hygiene scoring 0-100), flow-based prompting, Mermaid diagram generation, and dependency-aware sprint planning, integrating via stdio transport with Claude and compatible AI agents.
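Tool chaining with context propagation, as described above, can be sketched as tools that each read and extend a shared context object before handing off. Everything here is hypothetical: the tool names, the toy hygiene scoring, and the `runChain` helper are invented for illustration and are not the server's actual implementation.

```typescript
// Hypothetical sketch of tool chaining with shared state: each tool
// returns its output plus an updated context that later tools can read.
// Names and scoring logic are illustrative only.

type Context = Record<string, unknown>;
type Tool = (input: string, ctx: Context) => { output: string; ctx: Context };

const tools: Record<string, Tool> = {
  // Toy 0-100 hygiene score: penalize lines over 80 characters
  // (a stand-in for real code-quality analysis).
  hygieneScore: (code, ctx) => {
    const longLines = code.split("\n").filter((l) => l.length > 80).length;
    const score = Math.max(0, 100 - longLines * 10);
    return { output: String(score), ctx: { ...ctx, hygieneScore: score } };
  },
  // Reads the score a previous tool left in the shared context.
  summarize: (code, ctx) => ({
    output: `score=${ctx.hygieneScore ?? "?"} lines=${code.split("\n").length}`,
    ctx,
  }),
};

// Run a chain: each tool sees the context produced by earlier tools.
function runChain(chain: string[], input: string): string {
  let ctx: Context = {};
  let out = input;
  for (const name of chain) {
    const result = tools[name](input, ctx);
    ctx = result.ctx;
    out = result.output;
  }
  return out;
}

console.log(runChain(["hygieneScore", "summarize"], "const x = 1;\nconst y = 2;"));
// → "score=100 lines=2"
```

In a real MCP server the chain would be driven by the agent over stdio, but the state-passing pattern is the same: tools stay stateless while the orchestrator threads context between them.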

Scores updated daily from GitHub, PyPI, and npm data.