
PowerSolver Tests User Guide

What Is the Test System?

The PowerSolver Test System helps you verify that your planning problems work correctly. It automatically checks that:

  • Solutions are valid (feasible)
  • Scores match expectations
  • Assignments are correct
  • The solver produces consistent results
Think of it as a quality assurance tool: it runs your planning problems and tells you whether they're working as expected.


When to Use Tests

Scenario                       Why Test?
After creating a new problem   Verify constraints work correctly
After changing constraints     Ensure changes didn't break anything
Before going live              Validate production readiness
Troubleshooting                Diagnose why solutions are wrong
Regular check-ups              Catch problems early

Quick Start

Step 1: Open the Test Runner

Navigate to the test runner interface:

  • Local: http://localhost:8080/tests.html
  • Production: Contact your administrator for the URL

Step 2: Check System Health

Screenshot: Health Status Panel

Look for green indicators:

  • 🟢 Solver Online — The solver API is working
  • 🟢 Database Online — Test case storage is working

If either shows red, contact your administrator.

Step 3: Load Test Cases

Click "Reload Tests" to load available test cases from the database.

Step 4: Run a Test

  1. Find the test you want to run
  2. Click the ▶ Play button on the test card
  3. Wait for the result (usually 10-60 seconds)
  4. Check the status: ✅ Pass or ❌ Fail

Step 5: View Results

Click on the test card to see detailed results including:

  • Score breakdown
  • Assignment validation
  • AI analysis (if enabled)

Understanding the Test Interface

Test Cards

Screenshot: Test Card

Each test appears as a card showing:

Element       Description
Status Icon   ✅ Pass, ❌ Fail, ⭕ Pending
Test Name     Descriptive name of the test
Category      Problem type (Employee Rostering, Task Assignment, etc.)
Variables     Number of planning variables (e.g., "2V")
Entities      Number of entities in the problem
Constraints   Number of constraints
Score         Latest solution score

Test Card Buttons

Button            Icon   What It Does
Play              ▶      Run this test
Stability Probe   🔄     Run multiple times to verify consistency
View Details      👁     See detailed results

Understanding Test Results

Status Meanings

Status    Icon   Color    What It Means
Pass      ✅     Green    Everything worked correctly
Fail      ❌     Red      Something went wrong
Error     ⚠️     Orange   Technical problem (API, network)
Running   🔄     Blue     Test is in progress
Pending   ⭕     Gray     Not yet run

Understanding Scores

Every solution has a score showing how well it meets constraints:

Score: 0hard / -2medium / -150soft
       │        │          │
       │        │          └── Optimization quality
       │        └── Medium constraint penalties
       └── Hard constraint violations (must be 0!)

Score Quick Reference

Score Pattern           Meaning                              Good or Bad?
0hard/0medium/0soft     Perfect solution                     ✅ Excellent
0hard/0medium/-Xsoft    Valid solution with soft penalties   ✅ Good
0hard/-Xmedium/-Ysoft   Valid but with medium penalties      ⚠️ Acceptable
-Xhard/...              Invalid solution                     ❌ Problem!

The Golden Rule

A solution is only valid if the hard score is 0.

If you see -1hard or worse, the solution violates must-have rules and cannot be used.
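Assuming scores arrive as strings like `0hard/-2medium/-150soft`, the golden rule can be sketched as a small feasibility check (the function names here are illustrative, not part of the product API):

```python
import re

def parse_score(score: str) -> dict:
    """Split a score string like '0hard/-2medium/-150soft' into named levels."""
    return {level: int(value)
            for value, level in re.findall(r"(-?\d+)(hard|medium|soft)", score)}

def is_feasible(score: str) -> bool:
    """A solution is valid only if the hard score is exactly 0."""
    return parse_score(score).get("hard", 0) == 0

print(is_feasible("0hard/-2medium/-150soft"))  # True: valid, only optimization penalties
print(is_feasible("-1hard/0medium/0soft"))     # False: violates a must-have rule
```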

Validation Types

The test system performs several checks:

1. Score Validation

Checks if the solution score matches what's expected.

Check           What It Means
Feasibility     Is the hard score 0?
Pattern Match   Does the score match the expected pattern?
Range Check     Is the score within the acceptable range?

Example:

Expected: 0hard (any soft score)
Actual: 0hard/-150soft
Result: ✅ PASS
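The pattern check in the example above can be sketched as follows. This is an illustrative reading of the table, not the exact implementation: levels the pattern does not mention (here, soft) are left unconstrained.

```python
import re

SCORE_RE = r"(-?\d+)(hard|medium|soft)"

def matches_pattern(actual: str, expected: str) -> bool:
    """Check an actual score against an expected pattern such as '0hard'."""
    actual_levels = {lvl: int(v) for v, lvl in re.findall(SCORE_RE, actual)}
    # Every level named in the pattern must match exactly; others are free.
    return all(actual_levels.get(lvl, 0) == int(v)
               for v, lvl in re.findall(SCORE_RE, expected))

print(matches_pattern("0hard/-150soft", "0hard"))  # True: soft score unconstrained
print(matches_pattern("-1hard/0soft", "0hard"))    # False: hard score must be 0
```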

2. Assignment Validation

Checks if entities are assigned to the right values.

Check             What It Means
Allowed Values    Is the assignment in the allowed set?
Required Values   Is a specific value assigned?
Any Value         Is something assigned (not null)?

Example:

Entity: Task-1
Variable: assignedEmployee
Expected: Alice or Bob
Actual: Alice
Result: ✅ PASS
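The three assignment checks reduce to simple predicates; a minimal sketch, assuming a hypothetical rule schema (the real test cases may store expectations differently):

```python
def check_assignment(actual, rule):
    """Validate one entity's assigned value against an expectation rule.

    Illustrative rule shapes:
      {"type": "allowed", "values": ["Alice", "Bob"]}  # must be in the set
      {"type": "required", "value": "Alice"}           # must be exactly this
      {"type": "any"}                                  # must not be null
    """
    if rule["type"] == "allowed":
        return actual in rule["values"]
    if rule["type"] == "required":
        return actual == rule["value"]
    if rule["type"] == "any":
        return actual is not None
    raise ValueError(f"unknown rule type: {rule['type']}")

# The Task-1 example from above: Alice is in the allowed set {Alice, Bob}.
print(check_assignment("Alice", {"type": "allowed", "values": ["Alice", "Bob"]}))  # True
```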

3. AI Validation (Diagnostic Only)

AI provides additional insights but does not affect pass/fail.

AI Verdict     What It Means
pass           AI thinks the solution looks good
fail           AI found potential issues
inconclusive   AI couldn't determine

Note: AI validation is for extra insights only. A test can pass even if AI says "fail."

Common Test Scenarios

Scenario 1: Verify a New Problem

Goal: Make sure your new planning problem works correctly.

  1. Create a test case for your problem
  2. Set expected score pattern (e.g., "0hard")
  3. Run the test
  4. Check that status is ✅ Pass

Scenario 2: Regression Testing

Goal: Ensure changes didn't break existing functionality.

  1. Run all tests after making changes
  2. Check for any new failures
  3. Investigate failures to determine if they're expected

Scenario 3: Troubleshooting a Problem

Goal: Figure out why a solution is wrong.

  1. Run the test with AI validation enabled
  2. Review the detailed results
  3. Check constraint violations
  4. Read AI analysis for insights

Scenario 4: Verify Determinism

Goal: Ensure the solver produces consistent results.

  1. Click the 🔄 Stability Probe button
  2. Wait for multiple runs to complete
  3. Check if all runs produced the same result
  4. If not deterministic, adjust solver settings

Test Configuration

Basic Settings

Setting          Default   Description
Solver Timeout   30 sec    How long the solver runs
Poll Interval    1000 ms   How often to check for completion
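These two settings interact as a poll-until-done loop. A minimal sketch, assuming a caller-supplied `get_status()` function (the real test runner polls the solver API instead):

```python
import time

def wait_for_solution(get_status, timeout_sec=30, poll_interval_ms=1000):
    """Poll the solver until it finishes or the timeout elapses."""
    deadline = time.monotonic() + timeout_sec
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("SOLVED", "FAILED"):
            return status
        time.sleep(poll_interval_ms / 1000)  # wait one poll interval
    return "TIMEOUT"
```

A longer timeout only raises the ceiling; the loop returns as soon as the solver reports completion.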

Timeout Guidelines

Problem Size               Recommended Timeout
Small (< 20 entities)      10-30 seconds
Medium (20-100 entities)   30-60 seconds
Large (100+ entities)      60-120 seconds

AI Validation Settings

Setting                Description
Enable AI Validation   Turn AI analysis on/off
AI Provider            Which AI service to use
AI Model               Which model to use
Temperature            Use 0.0 for consistent results

Interpreting Common Failures

"Score Mismatch"

What it means: The actual score doesn't match what was expected.

Possible causes:

  • Constraints changed
  • Expected score is wrong
  • Problem data changed

What to do:

  1. Review the actual vs expected scores
  2. Check if constraints were modified
  3. Update expected score if the new result is correct

"Infeasible Solution"

What it means: The solution has hard constraint violations (hard score < 0).

Possible causes:

  • Not enough resources
  • Conflicting constraints
  • Invalid data

What to do:

  1. Check which hard constraints are violated
  2. Add more values (employees, time slots, etc.)
  3. Review constraint configuration

"Assignment Error"

What it means: An entity was assigned to an unexpected value.

Possible causes:

  • Constraint not working correctly
  • Expected assignment is too strict
  • Missing constraint

What to do:

  1. Check which assignment failed
  2. Review the constraint that should enforce this
  3. Update expected assignments if needed

"Timeout"

What it means: The solver didn't finish in time.

Possible causes:

  • Problem too large
  • Timeout too short
  • Solver performance issue

What to do:

  1. Increase the timeout setting
  2. Simplify the problem for testing
  3. Check solver health

"API Error"

What it means: Technical problem communicating with the solver.

Possible causes:

  • Solver is offline
  • Network issue
  • Authentication problem

What to do:

  1. Check API health status
  2. Verify network connection
  3. Contact administrator

Stability Testing

What Is a Stability Probe?

A stability probe runs the same test multiple times to verify the solver produces consistent (deterministic) results.

Why It Matters

If results vary between runs, you can't trust the test results. The solver should produce the same solution for the same problem every time.

Running a Stability Probe

  1. Click the 🔄 button on a test card
  2. Wait for all runs to complete (default: 3 runs)
  3. Check the result:
Result              Meaning
DETERMINISTIC       All runs produced identical results
NON-DETERMINISTIC   Results varied between runs
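Classifying a probe comes down to comparing the runs' results; a sketch, with illustrative field names (a run is summarized here by its score and assignments):

```python
def classify_stability(runs):
    """Label repeated runs of the same test as deterministic or not.

    Identical (score, assignments) fingerprints across all runs mean the
    solver behaved deterministically for this problem.
    """
    fingerprints = {(run["score"], tuple(sorted(run["assignments"].items())))
                    for run in runs}
    return "DETERMINISTIC" if len(fingerprints) == 1 else "NON-DETERMINISTIC"

runs = [{"score": "0hard/-150soft", "assignments": {"Task-1": "Alice"}}] * 3
print(classify_stability(runs))  # DETERMINISTIC
```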

If Results Are Non-Deterministic

Check these settings:

Setting             Required Value
Environment Mode    REPRODUCIBLE
Move Thread Count   1
Termination Type    STEP_COUNT
RNG Seed            Any fixed number
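The table above can be enforced programmatically before a probe runs; a sketch against a hypothetical settings dict (the key names are illustrative, not the product's exact configuration schema):

```python
REQUIRED_FOR_DETERMINISM = {
    "environment_mode": "REPRODUCIBLE",
    "move_thread_count": 1,
    "termination_type": "STEP_COUNT",
}

def determinism_issues(settings: dict) -> list:
    """Return a list of settings that would make runs non-reproducible."""
    issues = [f"{key} should be {want!r}, got {settings.get(key)!r}"
              for key, want in REQUIRED_FOR_DETERMINISM.items()
              if settings.get(key) != want]
    if settings.get("rng_seed") is None:  # any fixed number is acceptable
        issues.append("rng_seed should be set to a fixed number")
    return issues

good = {"environment_mode": "REPRODUCIBLE", "move_thread_count": 1,
        "termination_type": "STEP_COUNT", "rng_seed": 42}
print(determinism_issues(good))  # []: nothing to fix
```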

Running Multiple Tests

Run All Tests

  1. Click "Run All" button
  2. Tests run one at a time
  3. View summary when complete

Filter Tests

Use filters to run specific tests:

Filter     What It Does
Category   Show only one problem type
Status     Show only pass/fail/pending
Search     Find tests by name

Circuit Breaker

If multiple tests fail in a row, the system pauses to prevent wasted runs. This is called the "circuit breaker."

What to do:

  1. Fix the underlying issue (usually API problem)
  2. Click "Reset" to clear the circuit breaker
  3. Try again
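The circuit-breaker behavior described above can be sketched as a small counter; the threshold of 3 and the class name are illustrative, not the system's actual values:

```python
class CircuitBreaker:
    """Pause test execution after too many consecutive failures."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.consecutive_failures = 0

    def record(self, passed: bool):
        """A pass resets the streak; a failure extends it."""
        self.consecutive_failures = 0 if passed else self.consecutive_failures + 1

    @property
    def open(self) -> bool:
        """True means the breaker has tripped and runs are paused."""
        return self.consecutive_failures >= self.threshold

    def reset(self):
        """The 'Reset' button: clear the breaker after fixing the issue."""
        self.consecutive_failures = 0

cb = CircuitBreaker(threshold=3)
for result in [False, False, False]:  # three failures in a row
    cb.record(result)
print(cb.open)  # True: further runs are paused until reset
```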

Exporting Results

Why Export?

  • Keep a record of test runs
  • Share results with team members
  • Compare results over time
  • Audit trail for compliance

How to Export

  1. Run the tests you want to export
  2. Click "Export Results"
  3. Save the JSON file

What's Included

  • All test results
  • Scores and validations
  • Solver configuration
  • Timestamps
  • Problem snapshots
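The pieces listed above can be bundled into the exported JSON; a hypothetical sketch of the shape (field names are illustrative, not the exact export schema):

```python
import json
from datetime import datetime, timezone

def export_results(results, solver_config):
    """Bundle test results into a JSON document for record-keeping."""
    payload = {
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "solver_config": solver_config,
        "results": results,  # per-test scores, validations, problem snapshots
    }
    return json.dumps(payload, indent=2)

doc = export_results(
    results=[{"test": "Task Assignment smoke", "status": "PASS",
              "score": "0hard/-150soft"}],
    solver_config={"timeout_sec": 30},
)
```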

Best Practices

For Regular Testing

  • ✅ Run tests after any constraint changes
  • ✅ Use stability probes for important tests
  • ✅ Export results for record-keeping
  • ✅ Keep expected scores up to date

For Troubleshooting

  • ✅ Enable AI validation for insights
  • ✅ Check constraint violations first
  • ✅ Review the actual solution
  • ✅ Compare with a known-good test

For Production Validation

  • ✅ Run all tests before deployment
  • ✅ Verify determinism with stability probes
  • ✅ Export results for audit
  • ✅ Address all failures before go-live

Glossary

Term              Definition
Feasible          Solution with hard score = 0 (valid)
Infeasible        Solution with hard score < 0 (invalid)
Hard Constraint   Must be satisfied or the solution is invalid
Soft Constraint   Nice to have; affects optimization quality
Deterministic     Same input always produces the same output
Stability Probe   Multiple test runs to verify consistency
Circuit Breaker   Automatic pause after repeated failures

Quick Reference

Test Status Icons

Icon   Status
✅     Pass
❌     Fail
⚠️     Error
🔄     Running
⭕     Pending

Score Patterns

Pattern      Meaning
0hard        Hard score must be 0
feasible     Hard score = 0
infeasible   Hard score < 0
-Xhard       Expect X hard violations

Common Actions

Action            Button
Run single test   ▶ on test card
Run all tests     "Run All"
Check stability   🔄 on test card
View details      👁 on test card
Export results    "Export Results"
Reload tests      "Reload Tests"

Getting Help

In the Test Runner

  • Check the API Health status first
  • Review error messages in the result details
  • Use AI validation for additional insights

Support

Email: support@planningpowertools.com


Technical Reference

For advanced users and developers, see the full technical documentation:

  • timefold-tests/README.md — Technical implementation details
  • timefold-tests/Tests_User_Guide.md — Complete technical guide
  • timefold-tests/IMPLEMENTATION_SUMMARY.md — Architecture overview