How to create an interactive Go lab?

We'll divide this part into 5 sections:

  1. Creating lab metadata
  2. Setting up lab defaults
  3. Setting up lab challenges
  4. Setting up evaluation script
  5. Setting up test file

Introduction

This guide assumes that you have already created an interactive course from your instructor panel. If not, go here and set it up first.

Step 1 - Creating lab metadata

  • Add a new lab item on your course curriculum page.

  • A new lab item gets added. Click on the edit (pencil) button on the right. This should open the lab library widget in your instructor panel.

  • You should now be able to enter a quick lab name and press the "Create Lab" button. This creates a lab that you can edit.

  • Once it is created, click on the "Edit" button and you'll arrive at the lab designer view.

This is where you will add metadata to your lab and set it up for evaluation. Let's take a look at all the tabs here.

Lab Details

Lab details is the tab where you add two important things:

  • Lab title
  • Lab description

Once both of them are filled in, it would appear as follows:

Let's move to the next tab now.

Container Image

The container image should be set to Golang. The container image is a hint that lets us know ahead of time what the primary language of your lab will be.

Setting a container image

Lab Layout

Lab layout can be used to set a default layout in which the lab boots. We currently support the following layout types:

Terminal + IDE + Browser

This would include everything codedamn has to offer - a terminal at the bottom, an IDE in the center (powered by Monaco on desktops, and CodeMirror on mobile phones), and a browser preview of (ideally) what the user is working on. This is best if your playground runs an HTTP server.
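
For instance, a Go lab using this layout might ask the learner to build a tiny HTTP server and watch it render in the browser preview. Here is a minimal sketch (the port 1337 is only an illustration, not something confirmed by this guide - use whichever port your lab's browser preview is configured to proxy):

go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Serve a small greeting on every request
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from my codedamn Go lab!")
	})

	// NOTE: the port here is an assumption for illustration purposes
	http.ListenAndServe(":1337", nil)
}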

Terminal + Browser

This layout is for times when you don't need an IDE in place, and only want something hosted inside a browser - like an XSS challenge.

Terminal + IDE

This layout is for backend programming without a website UI. It would only include a terminal and an IDE - like VS Code. For example: headless E2E testing, writing Python scripts, Discord bots, etc.

Terminal only

This would not include anything except for a terminal. Best for Linux/bash labs where you want users to work exclusively in the terminal.

TIP

You can configure the layout through the .cdmrc file too. More information here.

Step 2 - Lab Defaults

The lab defaults section controls how your lab environment boots. It is one of the most important parts, because a wrong default environment might confuse your students. Therefore, it is important to set it up properly.

When a codedamn playground boots, it can set up a filesystem for the user by default. You can specify what the starting files should be by pointing to a git repository and a branch name:

Lab default repository

INFO

You will find a .cdmrc file in the repository given to you above. It is highly recommended, at this point, that you go through the .cdmrc guide and how to use .cdmrc in playgrounds to understand what the .cdmrc file exactly is. Once you understand how to work with .cdmrc, come back to this area.
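
As a concrete (hypothetical) illustration, the default repository for a Go lab could contain files like:

  • .cdmrc - the playground boot configuration described above
  • go.mod - an optional module file (module name "codedamn"); if present, the go mod init in the evaluation script is skipped harmlessly
  • main.go - the starter code the learner sees when the lab boots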

Step 3 - Lab challenges

The next step is to set up challenges and evaluation for your lab. This is the part where your learners learn the most, because they have to pass concrete challenges.

TIP

It is highly recommended that you watch the video below to understand the architecture.

This is also the biggest advantage of using codedamn for hosting your course - making it truly interactive and hands-on for your users.

Let's start by setting up challenges.

The interface above can be used to add challenges to your lab. You can also add hints to every single challenge you put here.

TIP

When the user runs the tests but fails a few challenges, the hint for each failed challenge is displayed to the user.

Step 4 - Evaluation Script

The evaluation script is what actually runs when the user clicks on the "Run Tests" button in the codedamn playground.

We have to write a boolean array to the file $UNIT_TEST_OUTPUT_FILE, where the nth entry records whether the nth challenge passed. For example, if your lab has three challenges and the user passes the first two, the script should end up writing:
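
json
[true, true, false]

Since Go already has a built-in testing utility (go test), we can use it as follows: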

sh
#!/bin/bash
set -e

mv "$TEST_FILE_NAME" /home/damner/code/codedamn_evaluation_test.go

# run test
cd /home/damner/code
go mod init codedamn || true # assuming you used "codedamn" as the module name; fails harmlessly if go.mod already exists
go test -json -parallel 1 > codedamn_evaluation_output.json || true

# process results file
cat > processGoResults.js << 'EOF'
const fs = require('fs')

// Read the test results file into memory as a string
const testResults = fs.readFileSync('./codedamn_evaluation_output.json', 'utf8')
// Split into lines, dropping any empty ones
const lines = testResults.split('\n').filter(Boolean)

// Create an empty array to store the pass/fail status of each test
const results = []

// Loop through each line, parse it as JSON, and keep only result lines
lines.forEach(line => {
  const output = JSON.parse(line).Output?.trim()
  if (output === undefined) return

  // Result lines look like "--- PASS: TestAdd ..." or "--- FAIL: TestAdd ..."
  if (!output.includes('--- PASS') && !output.includes('--- FAIL')) return

  // Add the pass/fail status to the array
  results.push(output.includes('--- PASS'))
})

// Write results
fs.writeFileSync(process.env.UNIT_TEST_OUTPUT_FILE, JSON.stringify(results))
EOF

# remove go.mod before processing the results, so that if the script below fails we don't have to remove go.mod manually
rm go.mod

# process results
node processGoResults.js

# remove files
rm /home/damner/code/codedamn_evaluation_test.go codedamn_evaluation_output.json processGoResults.js

Let's explain what's going on here:

  • We move our test file (that we will write in the next step) into the same directory as the user code, /home/damner/code. This way we can access the user's packages comfortably.
  • We run go mod init since a go.mod file is required to be present when you run the go test utility. If you have already created a go.mod file (in the default repository file setup), this command fails harmlessly thanks to the || true guard.
  • We now run go test with the -json and -parallel 1 flags. We need the -json flag as we will parse the output using a simple Node.js script written later. You can use Go for that parsing too (if you can write an equivalent). We need -parallel 1 so that the tests are processed in the correct order (the positions in the boolean array are linked to how the tests appear on the frontend).
  • The go test output is stored in a file called codedamn_evaluation_output.json. Technically, this file is not valid JSON, since Go streams the output as individual JSON objects, one per line (see the sample lines after this list).
  • We finally run the Node.js script, which splits codedamn_evaluation_output.json by newline, parses each line as JSON, and only processes the objects whose .Output field contains --- PASS or --- FAIL.
  • Since we pass -parallel 1, the output order is preserved and matches the order in which you wrote the tests.
  • We store this boolean array in $UNIT_TEST_OUTPUT_FILE, which is then used to show the pass/fail results to the end user.
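
For reference, the lines inside codedamn_evaluation_output.json are emitted by Go's test2json machinery and look roughly like this for a passing test (timestamps shortened for readability):

json
{"Time":"2024-01-01T12:00:00Z","Action":"run","Package":"codedamn","Test":"TestAdd"}
{"Time":"2024-01-01T12:00:00Z","Action":"output","Package":"codedamn","Test":"TestAdd","Output":"--- PASS: TestAdd (0.00s)\n"}
{"Time":"2024-01-01T12:00:00Z","Action":"pass","Package":"codedamn","Test":"TestAdd","Elapsed":0}

Only the middle line (whose Output contains --- PASS) contributes an entry to the boolean array.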

Note: Be careful with the package imports in Go, the package name, and the test file location.

Step 5 - Test file

You will see a button named Edit Test File in the Evaluation tab. Click on it.

When you click on it, a new window will open. This is a test file area.

You can write anything here. Whatever you write here can be executed from the "Test command to run" section inside the evaluation tab we were in earlier.

The point of having a file like this is to give you a place where you can write your evaluation tests. As we decided earlier, we can write a simple Go test here:

go
package codedamn

import "testing"

func TestAdd(t *testing.T) {
    got := add(2, 3)
    if got != 5 {
        t.Errorf("add(2, 3) = %d; want 5", got)
    }
}

This file contains only one test at the moment. When run, it produces the JSON output stream that is then processed by the Node.js script from earlier. This completes your evaluation script for the lab. Your lab is now almost ready for users.
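
For reference, a learner solution that passes this test could be as simple as the file below (assuming the challenge asks them to implement add somewhere inside /home/damner/code; the file name add.go is only an illustration):

go
package codedamn

// add returns the sum of two integers - this is what the test above expects
func add(a, b int) int {
	return a + b
}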

Setting up a verified solution is highly recommended. Once your lab is ready, all you have to do is click on "Test lab", write code that passes your lab, and run that code once.

Once you do that, your lab is marked as a lab with a verified solution. This also helps students, since we can show them a Monaco diff editor comparing their attempt against the verified solution from the creator (you).

At this point, your lab is complete. You can now link this lab in your course, ready to be served to thousands of students 😃 Watch the video tutorial below to see the full flow: