What is asgeirtj/system_prompts_leaks?
Overview
A fast-growing open-source project on GitHub curates system prompts from leading chatbots, giving developers a clear, centralized view of the instructions that steer how these assistants respond.
- Curates system prompts from leading chatbots to reveal how instructions shape model behavior, tone, and context across conversations.
- Demonstrates how prompts influence responses and conversation flow, helping teams optimize interactions and user experiences.
- Raises questions about data provenance, consent, and responsible sharing, highlighting ethical considerations around sourcing prompts and sharing them publicly.
Scope and boundaries
Clarity you can rely on when studying prompts and model interactions
Here are concise, practical guidelines to help developers and readers understand what’s being analyzed—and what isn’t.
- Public prompts and observable model interactions. The discussion focuses on prompts and responses that are publicly accessible or shareable, without accessing private datasets.
- No private prompts or sensitive user data disclosed. Privacy means avoiding the disclosure of private prompts, personal information, or internal configurations tied to individual users.
- Intended for analysis, education, and responsible discussion—not exploitation. The goal is to learn, improve safety, and promote ethical use rather than misuse or manipulation.
Why this repository matters for developers and researchers
Value for transparency and learning
Transparent, practical insights that accelerate prompt design and debugging.
- Offers a window into prompting strategies that shape responses.
- Strategies include explicit task prompts, role prompts that establish a persona, few-shot examples that illustrate desired formats, and constraints on length or style.
- Seeing these prompts helps you understand why outputs vary in tone, depth, or structure—and how to steer results during development and testing.
- Helps practitioners understand the assumptions and constraints embedded in system prompts.
- System prompts set the baseline for tone, scope, safety, and topic boundaries for a session.
- Transparency helps you anticipate how changes to a system prompt shift outputs and affect interactions with user prompts. In practice, you may see the general intent of a system prompt on some platforms, while others keep the exact text private.
- Common built-in assumptions include who the model assumes the user is, what is considered safe to say, and how verbose or concise the replies should be.
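The prompting strategies above can be sketched as a simple prompt assembly. This is an illustrative example using a generic chat-style message list; the role names and message structure mirror typical chat APIs but are not tied to any specific provider, and `build_messages` is a hypothetical helper:

```python
# Sketch of common prompting strategies using a generic chat-message list.
# The "system" / "user" / "assistant" roles are illustrative conventions.

def build_messages(task: str) -> list[dict]:
    system = (
        "You are a concise technical writer. "   # role prompt: establishes a persona
        "Answer in at most three sentences."     # constraint on length/style
    )
    few_shot = [  # few-shot examples that illustrate the desired format
        {"role": "user", "content": "Summarize: HTTP is a request-response protocol."},
        {"role": "assistant",
         "content": "HTTP is a stateless request-response protocol used on the web."},
    ]
    return [{"role": "system", "content": system},
            *few_shot,
            {"role": "user", "content": task}]   # explicit task prompt

messages = build_messages("Summarize: JSON is a data-interchange format.")
print(len(messages), messages[0]["role"])  # 4 system
```

Changing only the system string while keeping the task fixed is a quick way to observe how the baseline instructions shift tone, depth, and structure.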
Ethical and safety considerations
Ethics and safety must guide your work with public prompts and developer tooling.
- Public prompts can raise privacy and policy concerns if misused. They may reveal sensitive data or enable prompt injection or other manipulations that bypass safeguards or violate platform terms. To mitigate risk, minimize data exposure, avoid sharing prompts that could disclose personal or proprietary information, and ensure compliance with privacy laws and the terms of service.
- Responsible research and disclosure are essential. Follow ethical guidelines, obtain permission when required for testing on systems, report vulnerabilities to vendors or affected parties, and document your methodology so others can learn and improve safety without causing harm.
How the project is organized and how to contribute
Repo structure and contents
Get oriented fast: here’s what’s inside and how to use it.
- README.md: A concise, human-friendly guide with setup steps, quick-start examples, and the project’s purpose.
- LICENSE: The terms that govern use, modification, and redistribution of the code and data.
- CONTRIBUTING.md: Clear guidelines for contributing—how to file issues, request features, and submit pull requests.
- prompts/ (a prompts directory): A collection of prompts stored in JSON, JSONL, or CSV formats, often with per-prompt metadata.
Browsing the prompts
- Model: Browse by model or model family. Prompts may include a model tag to filter for a specific model.
- Prompt category: Filter by category or use-case, such as coding, documentation, or data analysis.
- Prompt type: Filter by type—instruction-based, few-shot, chain-of-thought, etc.—to compare strategies.
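If the prompts ship as JSONL with per-prompt metadata, the filters above can be applied with a few lines of standard-library Python. The field names (`model`, `category`, `type`) are assumptions for illustration; check the repository's actual schema before relying on them:

```python
import json

def load_prompts(path: str) -> list[dict]:
    """Load one JSON object per line (JSONL)."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def filter_prompts(prompts: list[dict], **criteria) -> list[dict]:
    """Keep prompts whose metadata matches every given key/value pair."""
    return [p for p in prompts
            if all(p.get(k) == v for k, v in criteria.items())]

# Demonstration with in-memory records instead of a real file;
# the metadata keys and values here are hypothetical.
records = [
    {"model": "model-a", "category": "coding", "type": "few-shot", "text": "..."},
    {"model": "model-b", "category": "coding", "type": "instruction", "text": "..."},
]
matches = filter_prompts(records, category="coding", type="few-shot")
print(len(matches))  # 1
```

The same `filter_prompts` helper works unchanged for CSV rows loaded with `csv.DictReader`, since both yield dictionaries keyed by metadata field.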
Contributing responsibly
Contribute with clarity and integrity: respect licenses, credit authors, and support maintainers and users. Follow these two essential practices to guide your contributions.
- Follow license terms and cite sources.
- Identify the project’s license (e.g., MIT, Apache 2.0, GPL) by checking the LICENSE file or headers in the code.
- Comply with the license when reusing, modifying, or distributing code or documentation. This may include attribution, preserving notices, and releasing changes under the same license when required.
- Credit original sources when you copy or adapt material. If attribution is required, include it in code comments, documentation, or release notes as specified by the license.
- Document provenance in commits or pull requests by linking to the original source where the license allows.
- Report sensitive or misleading content through issues or the project’s official channels.
- Identify content that could pose security, privacy, copyright, or factual reliability concerns.
- Use the project’s official channels to report issues, or contact maintainers through the listed methods (issues, mailing lists, or chat).
- Provide a clear, reproducible report with steps to reproduce, expected vs. actual behavior, affected versions, and relevant source links.
- Be constructive and respectful. Focus on the content and how to fix it, offer suggestions or fixes if possible, and redact sensitive data as needed.
Adopting these practices builds trust, keeps projects healthy, and makes collaboration smoother for everyone.
Ethical, legal, and practical considerations when using or sharing prompts
Safety and policy implications
We build safer AI systems by understanding how prompts and tooling interact with safeguards.
- Prompts can be used to jailbreak or bypass safeguards if misused; analyses should avoid enabling such misuse.
- Promote careful handling and responsible disclosure of findings.
- Design and enforce safeguards, governance frameworks, and clear usage policies to minimize risk to users and society.
- Encourage responsible collaboration with researchers and the public, including timely disclosure of issues to vendors or the community when appropriate.
Legal and privacy caveats
Legal and privacy considerations for developers and teams working with prompts, models, and data. Use these guidelines to stay compliant while evaluating new tools.
- Respect licensing terms and platform policies. Licenses define how software, datasets, and model artifacts may be used, modified, shared, or redistributed. Platform policies—terms of service, acceptable use, and privacy notices—govern allowable activities and data handling. Stay compliant by reviewing the license and policies, honoring attribution and redistribution requirements, and avoiding prohibited uses.
- Avoid sharing prompts that could enable wrongdoing or breach terms of service. Do not distribute prompts that meaningfully facilitate illegal activity (e.g., fraud, malware development) or that bypass security controls or data access rules. If you must share prompts, redact sensitive details, include safety notes, and follow the provider’s terms and best-practice guidelines to minimize misuse.
- Be mindful of privacy and data handling. Providers may log, monitor, or share inputs and outputs. Avoid sharing personal or sensitive data when possible; consult privacy policies and data-processing terms; prefer local or on-premises deployments for stronger privacy controls; implement data minimization, encryption, access controls, and explicit consent where required.