samuel-dobrancin-qa/perplexity-ai-evaluation
Structured evaluation of Perplexity AI across three test cases using LLM-as-judge methodology. Assesses source faithfulness, uncertainty calibration, source quality, and completeness.
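As a rough illustration of the LLM-as-judge approach named above (the repo's actual rubric, prompts, and scoring scale are not shown on this page, so every name and detail below is an assumption), a judge model is typically given the question, the answer under test, and the cited sources, and asked to return one score per criterion:

import json

# Hypothetical rubric mirroring the four criteria in the description above.
CRITERIA = ["source_faithfulness", "uncertainty_calibration",
            "source_quality", "completeness"]

def build_judge_prompt(question: str, answer: str, sources: list[str]) -> str:
    """Assemble an LLM-as-judge prompt; the wording is illustrative only."""
    source_block = "\n".join(f"[{i+1}] {s}" for i, s in enumerate(sources))
    return (
        "Rate the ANSWER on each criterion from 1 (poor) to 5 (excellent).\n"
        f"Criteria: {', '.join(CRITERIA)}\n"
        "Reply with a JSON object mapping each criterion to its score.\n\n"
        f"QUESTION: {question}\nSOURCES:\n{source_block}\nANSWER: {answer}"
    )

def judge(question: str, answer: str, sources: list[str], call_llm) -> dict:
    """call_llm is a placeholder for whatever judge model the evaluation uses."""
    raw = call_llm(build_judge_prompt(question, answer, sources))
    scores = json.loads(raw)  # assumes the judge complied with the JSON format
    return {c: scores.get(c) for c in CRITERIA}

The JSON-only reply format is one common way to make judge outputs machine-scorable; the repository may use a different scale or output contract.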
Stars: —
Forks: —
Language: —
License: —
Category: —
Last pushed: Apr 04, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/samuel-dobrancin-qa/perplexity-ai-evaluation"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
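For scripted access, the same endpoint can be called from Python. A minimal sketch using only the standard library (the response schema is not documented on this page, so the script just prints whatever JSON the API returns):

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/"
       "samuel-dobrancin-qa/perplexity-ai-evaluation")

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # assumes a JSON body, as the curl example implies

print(json.dumps(data, indent=2))  # inspect whichever fields the API exposes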
Higher-rated alternatives
Leonxlnx/agentic-ai-prompt-research
Research into how agentic AI coding assistants work — reconstructed prompt patterns, agent...
antonio0720/writing-intelligence
75 files. 76,327 words. The most advanced writing compiler ever open-sourced — now with a...
jefftriplett/files-to-claude-xml
Use XML tags for long context prompting using Claude's multi-document structure.
m727ichael/context-engineering
Information architecture for AI reasoning. PromptOS + HITL Context Engine. Copy, paste, use
madara88645/Compiler
A tool that compiles messy natural language prompts into a structured intermediate...