
Contributing Guidelines

Guidelines and best practices for contributing challenges to Kubeasy.

Last updated: March 31, 2026

Thank you for contributing to Kubeasy! This guide explains how to submit high-quality challenges that provide a great learning experience.

Before you start

Check existing challenges

Browse the challenges repository to:

  • Avoid duplicating existing challenges
  • Understand the current challenge structure
  • Get inspiration from well-designed challenges

Propose your idea

Before investing time in building a challenge, open a GitHub Issue to discuss your idea:

  1. Go to kubeasy-dev/challenges/issues
  2. Create a new issue with the "Challenge Proposal" template
  3. Describe:
    • What Kubernetes concept it teaches
    • The broken scenario
    • Why it's valuable to learn
    • Estimated difficulty and time

This helps avoid duplicate work and ensures your challenge aligns with Kubeasy's goals.

Contribution process

1. Fork and clone

# Fork the repository on GitHub, then:
git clone https://github.com/<your-username>/challenges.git
cd challenges

2. Create a branch

git checkout -b challenge/<challenge-slug>

Use descriptive branch names:

  • challenge/rbac-service-account
  • challenge/network-policy-egress
  • challenge/pvc-storage-class
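
Not required by the repository, but a quick shell pattern test can sanity-check a slug before you branch. The slug_ok helper below is hypothetical, not part of any Kubeasy tooling; it simply enforces the lowercase-with-hyphens convention:

```shell
# Hypothetical helper: check that a challenge slug is lowercase
# letters, digits, and hyphens, with no leading/trailing hyphen.
slug_ok() {
  case "$1" in
    *[!a-z0-9-]*|''|-*|*-) return 1 ;;  # reject bad chars, empty, edge hyphens
    *) return 0 ;;
  esac
}

slug_ok "rbac-service-account" && echo valid    # prints "valid"
slug_ok "RBAC_Service" || echo invalid          # prints "invalid"
```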

3. Build the challenge

Follow the structure described in Challenge Structure and Creating Challenges.

Your challenge folder should contain:

<challenge-slug>/
├── challenge.yaml      # Metadata, description, AND objectives
├── manifests/          # Initial broken state
│   ├── deployment.yaml
│   └── ...
├── policies/           # Kyverno policies (prevent bypasses)
│   └── protect.yaml
└── image/              # Optional: custom Docker images
    └── Dockerfile
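
A minimal challenge.yaml might look like the sketch below. The top-level field names (title, difficulty, estimatedTime, theme) are illustrative assumptions, not the authoritative schema; consult the Challenge Structure page for the real field names. The objectives block follows the shape shown later in this guide.

```yaml
# Hypothetical sketch of challenge.yaml -- top-level field names are
# assumptions; see the Challenge Structure docs for the real schema.
title: "Web App Keeps Restarting"
description: |
  The web-app deployment is applied, but its pod never becomes Ready.
  Investigate and restore stable operation.   # symptoms, not causes
difficulty: easy
estimatedTime: 15        # minutes
theme: monitoring-debugging
objectives:
  - key: pod-running
    title: "Pod Ready"
    description: "Web app pod must be running"
    order: 1
    type: condition
    spec:
      target:
        kind: Pod
        labelSelector:
          app: web-app
      checks:
        - type: Ready
          status: "True"
```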

4. Test thoroughly

See Testing Challenges for comprehensive testing strategies.

Minimum testing requirements:

  • Challenge works on a fresh Kind cluster
  • Problem is reproducible
  • Solution passes all objectives
  • Objective feedback is clear
  • Estimated time is accurate
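
The requirements above can be run as a quick local loop. This is a sketch, not an official script: the cluster name, label selector, and directory layout are assumptions based on the structure described earlier, so adjust them to your challenge.

```shell
#!/usr/bin/env bash
# Sketch of a local test loop -- cluster name and selectors are assumptions.
set -euo pipefail

SLUG="<challenge-slug>"

# 1. Fresh cluster
kind create cluster --name kubeasy-test

# 2. Deploy the broken state and its protection policies
kubectl apply -f "${SLUG}/manifests/"
kubectl apply -f "${SLUG}/policies/"

# 3. Confirm the problem is reproducible (expect a non-Ready pod)
kubectl get pods -l app=web-app

# 4. Apply your reference solution, then re-check that every objective passes

# 5. Clean up
kind delete cluster --name kubeasy-test
```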

5. Commit your changes

Use clear, descriptive commit messages:

git add .
git commit -m "feat: add memory-pressure challenge"

Follow conventional commit style:

  • feat: add challenge for network policies
  • fix: correct validation timeout in RBAC challenge
  • docs: improve description for storage class challenge

6. Push and create a pull request

git push origin challenge/<challenge-slug>

Open a pull request on GitHub.

Pull request requirements

PR title format

feat: add <challenge-slug> challenge

Examples:

  • feat: add network-policy-debugging challenge
  • feat: add rbac-service-account challenge

PR description template

## Challenge: <Challenge Title>

### What does this challenge teach?
Briefly explain the Kubernetes concept and why it matters.

### Difficulty and estimated time
- **Difficulty**: easy | medium | hard
- **Estimated time**: X minutes

### Theme
- resources-scaling | networking | rbac-security | volumes-secrets | monitoring-debugging

### Testing
- [x] Tested on fresh Kind cluster
- [x] Verified broken state is reproducible
- [x] Solution passes all objectives
- [x] Objective titles don't reveal the solution
- [x] Kyverno policies prevent bypasses
- [x] Estimated time is accurate

Challenge quality standards

Design principles

1. Single focused concept

Good:

"This challenge teaches how to configure resource requests and limits"

Bad:

"This challenge teaches resource limits, RBAC, network policies, and storage"

Guideline: One clear concept per challenge.

2. Realistic scenarios

Good:

"A deployment fails because the ServiceAccount lacks permissions to read ConfigMaps"

Bad:

"A deployment has exactly 3 typos that you must find"

Guideline: Mirror real production problems, not artificial puzzles.

3. Appropriate difficulty

Match difficulty to complexity:

  • easy: Single issue, clear error messages, common scenarios
  • medium: Multiple related issues, requires debugging
  • hard: Complex interactions, subtle issues, production-like

4. Mystery-preserving design

Descriptions should show symptoms, not causes. Objectives should check outcomes, not implementations.

Code quality

Manifests

# Good: Clear, minimal, well-commented
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      serviceAccountName: web-app-sa  # Lacks permissions
      containers:
      - name: nginx
        image: nginx:1.21

Objectives

# Good: Generic titles, checks outcomes
objectives:
  - key: pod-running
    title: "Pod Ready"
    description: "Web app pod must be running"
    order: 1
    type: condition
    spec:
      target:
        kind: Pod
        labelSelector:
          app: web-app
      checks:
        - type: Ready
          status: "True"

# Bad: Reveals solution, vague key
  - key: check1
    title: "Memory Set to 256Mi"
    description: "Check failed"

Bypass protection

Every challenge should include Kyverno policies to prevent obvious bypasses:

# policies/protect.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: protect-<challenge-slug>
spec:
  validationFailureAction: Enforce
  rules:
    - name: preserve-image
      match:
        resources:
          kinds: ["Deployment"]
          names: ["web-app"]
          namespaces: ["challenge-*"]
      validate:
        message: "Cannot change the application image"
        pattern:
          spec:
            template:
              spec:
                containers:
                  - name: nginx
                    image: "nginx:1.21"
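
To confirm the policy actually prevents the bypass, try the forbidden change yourself before submitting. A sketch, assuming the challenge is deployed in a namespace matching challenge-* with the policy applied:

```shell
# Sketch: verify the Kyverno policy rejects an image swap.
# The namespace name "challenge-demo" is an assumption for illustration.
kubectl -n challenge-demo set image deployment/web-app nginx=nginx:latest
# Expected: the request is denied with the policy message
# "Cannot change the application image"
```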

Review process

What reviewers look for

  1. Correctness

    • Does the challenge deploy with a reproducible broken state?
    • Do all objectives pass after applying the fix?
    • Do Kyverno policies prevent bypasses?
  2. Educational value

    • Does it teach something useful?
    • Is it appropriate for the stated difficulty?
    • Are descriptions mystery-preserving?
  3. Code quality

    • Are manifests minimal and clear?
    • Are objective titles generic (don't reveal solutions)?
    • Do objectives validate outcomes, not implementations?
  4. Testing

    • Has it been tested on a fresh cluster?
    • Is estimated time accurate?
    • Are multiple valid solutions accepted?

Addressing feedback

When reviewers request changes:

  1. Make the requested modifications
  2. Test again to ensure everything still works
  3. Push the changes to your branch
  4. Respond to review comments

git add .
git commit -m "fix: address review feedback"
git push origin challenge/<challenge-slug>

The PR will automatically update.

Coding standards

File naming

  • Use lowercase with hyphens
  • Be descriptive but concise

manifests/deployment.yaml
policies/protect.yaml
image/Dockerfile

YAML formatting

  • Use 2 spaces for indentation
  • Add comments to explain non-obvious configuration (like the intentional bug)
  • Group related resources in the same file with --- separator if needed
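
For example, a tightly coupled Service and Deployment can share one file, separated by ---. The file name and spec details below are illustrative:

```yaml
# manifests/web-app.yaml -- related resources grouped with a separator
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
# ... (deployment spec as shown earlier in this guide)
```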

Resource naming

Use clear, consistent names:

# Good
metadata:
  name: web-app
  labels:
    app: web-app
    component: frontend

# Bad
metadata:
  name: x
  labels:
    app: app1

Common pitfalls

1. Challenge is too complex

Problem: Trying to teach too many concepts at once.

Fix: Split into multiple challenges, each focusing on one concept.

2. Objectives reveal the solution

Problem: Objective titles or descriptions tell the user what to fix.

Fix: Use generic titles like "Stable Operation" instead of "Memory Limit Increased to 256Mi".

3. Missing bypass protection

Problem: Users can replace the broken app with a simpler working one.

Fix: Add Kyverno policies to protect container images and critical configuration.

4. Estimated time is inaccurate

Problem: Challenge takes much longer than stated.

Fix: Test with someone unfamiliar with the problem and adjust the estimate.

5. Solution has multiple valid approaches

Problem: Users solve it differently than expected.

Fix: This is good! Make sure your objectives validate outcomes (pod is healthy) rather than implementations (memory is 256Mi), so any valid fix passes.

Best practices

Design

  • Start simple -- complexity can be added later
  • Focus on one learning objective
  • Make the broken state obvious but not trivial
  • Use realistic, production-like scenarios

Implementation

  • Use stable, common images (nginx, python, busybox, etc.)
  • Avoid external dependencies when possible
  • Keep resource usage minimal
  • Test on a clean cluster
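
For instance, small resource requests keep a challenge comfortable on a laptop-sized Kind cluster. The exact values below are illustrative, not a Kubeasy requirement:

```yaml
# Sketch: a frugal container spec for local clusters
containers:
  - name: nginx
    image: nginx:1.21        # stable, common image
    resources:
      requests:
        cpu: 50m             # minimal footprint
        memory: 64Mi
```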

Documentation

  • Describe symptoms in the challenge description, not the root cause
  • State goals in the objective, not the method to achieve them
  • Never include solutions anywhere in the challenge files

After your PR is merged

Your challenge will be automatically:

  1. Built as an OCI artifact
  2. Published to ghcr.io/kubeasy-dev/challenges/<slug>:latest
  3. Available for users via kubeasy challenge start <slug>

If your challenge includes an image/ directory, the custom Docker image will also be built and published to ghcr.io/kubeasy-dev/<slug>:latest.

Getting help

If you need help contributing, open a GitHub Issue in the kubeasy-dev/challenges repository, the same place where challenge proposals are discussed.

License

By contributing, you agree that your contributions will be licensed under the same license as the project.
