How I Set Up a Beta Test for 18 Testers in Under an Hour

Using APIs, SQL, and MCP to automate the boring stuff

by Christine Pinto

This isn't a tutorial. It's a postmortem of how I set up a beta test for 18 people without spending my evening copying the same checklist 18 times.

That was my goal when I launched the beta for Wizzo, a QA companion in Slack that turns conversations, specs, and screenshots into test cases. It also supports collaborative team sessions around things like user scenarios and knowledge gaps. I'm building it at my startup, Epic Test Quest.

The product itself was ready. The beta setup wasn't.

Each tester needed their own place to report bugs and track progress. They needed realistic products and user stories to work with. And once the beta ended, I needed a clean way to reset everything without leaving test data scattered across tools and databases.

Doing all of that manually would have turned a quick beta launch into hours of repetitive, error-prone work, exactly the kind of setup I knew I'd regret doing by hand.

So instead of clicking through tools and hoping I didn't miss something, I automated the parts that mattered.

I needed to:

  • Give each tester their own tracking thread

  • Create five sample products with real user stories

  • Make cleanup predictable once testing ends

To do that, I relied on three things: GitHub’s GraphQL API, SQL, and MCP (Model Context Protocol). Here’s how it worked.

The Challenge

Picture this: 18 testers, each needing their own space to report bugs. Each needs a checklist to track progress. And I need to see everyone's status at a glance.

Manually, this would mean creating 18 threads, copying the same checklist over and over, and renaming each one. And one typo means starting over.

Then there's test data. Testers need real products to play with. They need user stories to generate test cases from. Setting up fake data in Jira? Another hour, easy.

And when testing ends? Cleanup. Removing test data. Resetting the database. More manual work.

I knew there had to be a better way.

GitHub Discussions + GraphQL API

I chose GitHub Discussions for tester feedback. Why? It's free. It's public. Testers can work at their own pace. And I can see all threads in one place.

But I didn't want to create 18 threads by hand. GitHub's GraphQL API lets you create discussions programmatically, exactly what I needed.

One script creates all 18 threads:

#!/usr/bin/env bash

# Array of tester names
TESTERS=(
  "Sarah Chen"
  "Mike Johnson"
  "Priya Patel"
  # ... 15 more names
)

# Create a discussion for each tester
for tester in "${TESTERS[@]}"; do
  gh api graphql -f query='
    mutation($repositoryId: ID!, $categoryId: ID!, $title: String!, $body: String!) {
      createDiscussion(input: {
        repositoryId: $repositoryId
        categoryId: $categoryId
        title: $title
        body: $body
      }) {
        discussion { url }
      }
    }
  ' -f repositoryId="$REPO_ID" \
    -f categoryId="$CATEGORY_ID" \
    -f title="Beta Testing Report - $tester" \
    -f body="$TEMPLATE"
done
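The script assumes $REPO_ID, $CATEGORY_ID, and $TEMPLATE are already set. The first two come from a one-time GraphQL lookup; a minimal sketch, with the owner and repo name as placeholders:

# One-time lookup of the repository ID and discussion category IDs.
# Replace owner and name with your own repo, then pick the category
# you created for beta feedback from the returned list.
gh api graphql -f query='
  query {
    repository(owner: "your-org", name: "your-repo") {
      id
      discussionCategories(first: 10) {
        nodes { id name }
      }
    }
  }
'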

The $TEMPLATE variable holds a progress tracker with checkboxes, resource links, bug report format, and feedback form links.
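For a sense of what goes in it, here's a trimmed-down sketch; the real template has more sections and links:

# A cut-down version of the checklist template; the real one has
# more resource links and a fuller bug report section.
TEMPLATE='## Progress Tracker
- [ ] Installed Wizzo in Slack
- [ ] Generated test cases from a spec
- [ ] Generated test cases from a screenshot
- [ ] Filed at least one bug report

## Bug Report Format
**Steps:** ... **Expected:** ... **Actual:** ...'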

One command. Eighteen personalized threads. Done in under 2 minutes.

Each tester found their name in the Discussions tab. They checked off items as they tested. I watched progress from one page.

SQL: Setting Up and Cleaning Up Test Data

Beta testers need something to test with. For Wizzo, that meant sample products and user stories.

I created five sample products modeled on real ones (Uber, Airbnb, Amazon, Stripe, and Chase Banking), each with features and user stories in Jira. This gave testers realistic scenarios to generate test cases from.
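Loading those stories is scriptable too. If you'd rather not click through Jira, the Jira Cloud REST API can create issues in bulk; a minimal sketch with placeholder credentials and content (the v2 endpoint accepts a plain-text description, while v3 requires Atlassian Document Format):

# Create one sample user story via Jira Cloud's REST v2 API.
# JIRA_EMAIL and JIRA_API_TOKEN are placeholders you set yourself.
curl -s -X POST \
  -u "$JIRA_EMAIL:$JIRA_API_TOKEN" \
  -H "Content-Type: application/json" \
  "https://epictestquest.atlassian.net/rest/api/2/issue" \
  -d '{
    "fields": {
      "project":   { "key": "SCRUM" },
      "issuetype": { "name": "Story" },
      "summary":   "As a rider, I can split a fare with friends",
      "description": "Acceptance criteria: fare splits evenly; every rider gets a receipt."
    }
  }'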

After testing? Cleanup time. Here's where SQL shines.

-- Remove test products
DELETE FROM products
WHERE team_id = 'beta-test-team-id'
AND created_at > '2024-12-01';

-- Clear expired draft test cases
DELETE FROM draft_test_cases
WHERE expires_at < NOW();
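One habit that made those deletes less scary: run them inside a transaction and check the reported row counts before committing. A sketch:

-- Wrap destructive cleanup in a transaction so a wrong WHERE clause
-- can be rolled back instead of committed.
BEGIN;

DELETE FROM products
WHERE team_id = 'beta-test-team-id'
AND created_at > '2024-12-01';

-- If the row count matches what you expect:
COMMIT;
-- Otherwise:
-- ROLLBACK;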

I also wrote queries to export metrics before cleanup:

-- How many test cases did each method generate?
SELECT type, COUNT(*) as count
FROM test_cases
WHERE team_id = 'beta-test-team-id'
GROUP BY type;

-- What was our draft-to-save rate?
SELECT
  COUNT(*) as total,
  COUNT(CASE WHEN status = 'promoted' THEN 1 END) as saved,
  ROUND(100.0 * COUNT(CASE WHEN status = 'promoted' THEN 1 END)
        / COUNT(*), 2) as save_rate
FROM draft_test_cases;

These queries told me what worked. Which features did testers use most? Where did they drop off? Data answered those questions.
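To keep those numbers around after the tables were wiped, the results need to land in a file first. One way to do that from the shell, assuming psql is installed and $DATABASE_URL holds your Postgres connection string:

# Dump a metrics query to CSV before running any cleanup deletes.
psql "$DATABASE_URL" -c "COPY (
  SELECT type, COUNT(*) AS count
  FROM test_cases
  WHERE team_id = 'beta-test-team-id'
  GROUP BY type
) TO STDOUT WITH CSV HEADER" > test_case_types.csv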

MCP: AI-Powered Automation

MCP (Model Context Protocol) lets you connect tools directly to AI assistants like Claude. For Wizzo's beta test, I used it to manage Jira data. Instead of clicking through Jira's web interface, I could just ask Claude:

"Show me all unassigned tasks in the SCRUM project."

Or: "Unassign all 47 tasks so we can reset for the next tester."

Here's how MCP connects to Jira:

{
  "mcpServers": {
    "jira": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "JIRA_URL=https://epictestquest.atlassian.net",
        "-e",
        "[email protected]",
        "-e",
        "JIRA_API_TOKEN=your-token",
        "mcp-atlassian"
      ]
    }
  }
}

Once connected, Claude could search, update, and manage Jira issues. No clicking required.

I also used MCP with Supabase (our database). Claude could run SQL queries directly. This made debugging much faster during the beta test.
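The Supabase side follows the same pattern as the Jira config. A sketch of what that can look like, using the Supabase MCP server package (check the Supabase docs for the current package name and flags):

{
  "mcpServers": {
    "supabase": {
      "command": "npx",
      "args": [
        "-y",
        "@supabase/mcp-server-supabase@latest",
        "--access-token",
        "your-personal-access-token"
      ]
    }
  }
}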

What I Learned

Time saved: About 3-4 hours of manual work became 30 minutes of scripting.

What worked well: GitHub Discussions gave testers freedom to work async, GraphQL made batch operations simple, SQL made cleanup predictable, and MCP let me manage everything from one place.

Tips for your next beta test:

  1. Script everything you'll do twice. If you're creating threads for 5+ people, automate it.

  2. Plan your cleanup first. Know how you'll reset before you start.

  3. Export metrics before deleting data. You'll want those numbers later.

  4. Use MCP if you're juggling several tools at once. It cuts down on context-switching.

Try It Yourself

All the scripts I used are public.

Looking back, was this a lot of automation for just 18 testers? Probably.

But the scripts are still there. The setup is repeatable. Cleanup isn't scary. And the next beta won't cost me another evening of copy-paste work.

For me, that's the real win.

Got questions? Find me on LinkedIn or drop a comment below.

Christine Pinto is an ISTQB-certified and award-winning tester turned tech founder with 18 years of showing teams why QA matters. She's been a manual tester, automation engineer, QA lead, and public speaker. She's now co-founder of Epic Test Quest, building Wizzo with testers like you to make quality management easier, faster, and more collaborative. The future of development will rely on QA more than ever before, and she wants to ensure that every tester is ready to step into that leadership role. As a native Berliner, she brings a direct, no-nonsense style that keeps QA at the center and treats quality as something built in from the start. Outside of QA, she loves gaming, anime, classical music, and adventure travel.
