PowerSolver Tests User Guide
What Is the Test System?
The PowerSolver Test System helps you verify that your planning problems work correctly. It automatically checks that:
- Solutions are valid (feasible)
- Scores match expectations
- Assignments are correct
- The solver produces consistent results
In short, it is a quality assurance tool that runs your planning problems and tells you whether they are working as expected.
When to Use Tests
| Scenario | Why Test? |
|---|---|
| After creating a new problem | Verify constraints work correctly |
| After changing constraints | Ensure changes didn't break anything |
| Before going live | Validate production readiness |
| Troubleshooting | Diagnose why solutions are wrong |
| Regular check-ups | Catch problems early |
Quick Start
Step 1: Open the Test Runner
Navigate to the test runner interface:
- Local: http://localhost:8080/tests.html
- Production: Contact your administrator for the URL
Step 2: Check System Health
Look for green indicators:
- 🟢 Solver Online — The solver API is working
- 🟢 Database Online — Test case storage is working
If either shows red, contact your administrator.
Step 3: Load Test Cases
Click "Reload Tests" to load available test cases from the database.
Step 4: Run a Test
- Find the test you want to run
- Click the ▶ Play button on the test card
- Wait for the result (usually 10-60 seconds)
- Check the status: ✅ Pass or ❌ Fail
Step 5: View Results
Click on the test card to see detailed results including:
- Score breakdown
- Assignment validation
- AI analysis (if enabled)
Understanding the Test Interface
Test Cards
Each test appears as a card showing:
| Element | Description |
|---|---|
| Status Icon | ✅ Pass, ❌ Fail, ⭕ Pending |
| Test Name | Descriptive name of the test |
| Category | Problem type (Employee Rostering, Task Assignment, etc.) |
| Variables | Number of planning variables (e.g., "2V") |
| Entities | Number of entities in the problem |
| Constraints | Number of constraints |
| Score | Latest solution score |
Test Card Buttons
| Button | Icon | What It Does |
|---|---|---|
| Play | ▶ | Run this test |
| Stability Probe | 🔄 | Run multiple times to verify consistency |
| View Details | 👁 | See detailed results |
Understanding Test Results
Status Meanings
| Status | Icon | Color | What It Means |
|---|---|---|---|
| Pass | ✅ | Green | Everything worked correctly |
| Fail | ❌ | Red | Something went wrong |
| Error | ⚠️ | Orange | Technical problem (API, network) |
| Running | 🔄 | Blue | Test is in progress |
| Pending | ⭕ | Gray | Not yet run |
Understanding Scores
Every solution has a score showing how well it meets constraints:
Score: 0hard / -2medium / -150soft
       │       │          │
       │       │          └── Optimization quality
       │       └── Medium constraint penalties
       └── Hard constraint violations (must be 0!)
Score Quick Reference
| Score Pattern | Meaning | Good or Bad? |
|---|---|---|
| 0hard/0medium/0soft | Perfect solution | ✅ Excellent |
| 0hard/0medium/-Xsoft | Valid solution with soft penalties | ✅ Good |
| 0hard/-Xmedium/-Ysoft | Valid but with medium penalties | ⚠️ Acceptable |
| -Xhard/... | Invalid solution | ❌ Problem! |
The Golden Rule
A solution is only valid if the hard score is 0.
If you see -1hard or worse, the solution violates must-have rules and cannot be used.
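The Golden Rule can be sketched as a small check. This assumes scores arrive as strings in the format shown above (e.g., `0hard/-2medium/-150soft`); the function names are illustrative, not part of the test system's API:

```python
def parse_score(score: str) -> dict:
    """Split a score string like '0hard/-2medium/-150soft' into named levels."""
    levels = {}
    for part in score.split("/"):
        # Separate the numeric value from the level name (hard/medium/soft).
        num = part.rstrip("abcdefghijklmnopqrstuvwxyz")
        levels[part[len(num):]] = int(num)
    return levels


def is_feasible(score: str) -> bool:
    """A solution is valid only if the hard score is exactly 0."""
    return parse_score(score).get("hard", 0) == 0
```

For example, `is_feasible("0hard/-150soft")` is true, while `is_feasible("-1hard/0soft")` is false: soft penalties affect quality, but only the hard level decides validity.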
Validation Types
The test system performs several checks:
1. Score Validation
Checks if the solution score matches what's expected.
| Check | What It Means |
|---|---|
| Feasibility | Is hard score = 0? |
| Pattern Match | Does score match expected pattern? |
| Range Check | Is score within acceptable range? |
Example:
Expected: 0hard (any soft score)
Actual: 0hard/-150soft
Result: ✅ PASS
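The pattern-match check in the example above can be sketched as follows. This assumes a pattern like `0hard` constrains only the levels it names and leaves the others free (the exact pattern grammar the test system uses may differ):

```python
import re


def matches_pattern(actual: str, pattern: str) -> bool:
    """Check that every level named in the pattern (e.g. '0hard') appears
    in the actual score with the same value; unnamed levels are free."""
    for level in pattern.split("/"):
        value, name = re.match(r"(-?\d+)([a-z]+)", level).groups()
        found = re.search(r"(-?\d+)" + name, actual)
        if found is None or int(found.group(1)) != int(value):
            return False
    return True
```

With this sketch, `matches_pattern("0hard/-150soft", "0hard")` passes, matching the example result above, while an actual score of `-1hard/0soft` would fail the same pattern.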
2. Assignment Validation
Checks if entities are assigned to the right values.
| Check | What It Means |
|---|---|
| Allowed Values | Is assignment in the allowed set? |
| Required Values | Is a specific value assigned? |
| Any Value | Is something assigned (not null)? |
Example:
Entity: Task-1
Variable: assignedEmployee
Expected: Alice or Bob
Actual: Alice
Result: ✅ PASS
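The three assignment checks from the table can be combined into one sketch. The function name and argument names are illustrative; the real validator's interface is not documented here:

```python
def check_assignment(actual, allowed=None, required=None):
    """Apply the assignment checks from the table above.

    - required: the assignment must equal this exact value
    - allowed:  the assignment must be one of these values
    - neither:  any non-null assignment passes (the "Any Value" check)
    """
    if actual is None:
        return False  # an unassigned variable always fails
    if required is not None:
        return actual == required
    if allowed is not None:
        return actual in allowed
    return True
```

In the example above, `check_assignment("Alice", allowed={"Alice", "Bob"})` passes, while an actual assignment of `Carol` would fail the same check.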
3. AI Validation (Diagnostic Only)
AI provides additional insights but does not affect pass/fail.
| AI Verdict | What It Means |
|---|---|
| pass | AI thinks the solution looks good |
| fail | AI found potential issues |
| inconclusive | AI couldn't reach a verdict |
AI validation is for extra insights only. A test can pass even if AI says "fail."
Common Test Scenarios
Scenario 1: Verify a New Problem
Goal: Make sure your new planning problem works correctly.
- Create a test case for your problem
- Set expected score pattern (e.g., "0hard")
- Run the test
- Check that status is ✅ Pass
Scenario 2: Regression Testing
Goal: Ensure changes didn't break existing functionality.
- Run all tests after making changes
- Check for any new failures
- Investigate failures to determine if they're expected
Scenario 3: Troubleshooting a Problem
Goal: Figure out why a solution is wrong.
- Run the test with AI validation enabled
- Review the detailed results
- Check constraint violations
- Read AI analysis for insights
Scenario 4: Verify Determinism
Goal: Ensure the solver produces consistent results.
- Click the 🔄 Stability Probe button
- Wait for multiple runs to complete
- Check if all runs produced the same result
- If not deterministic, adjust solver settings
Test Configuration
Basic Settings
| Setting | Default | Description |
|---|---|---|
| Solver Timeout | 30 sec | How long the solver runs |
| Poll Interval | 1000 ms | How often to check for completion |
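The two settings above work together: the runner polls the solver at the poll interval until it finishes or the timeout passes. A minimal sketch of that loop, assuming `fetch_status` is any callable returning a status string (the status names here are illustrative):

```python
import time


def wait_for_result(fetch_status, timeout_s=30, poll_interval_s=1.0):
    """Poll fetch_status() until the solver reports completion or
    the timeout elapses. Defaults mirror the table above
    (30 s timeout, 1000 ms poll interval)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status != "RUNNING":
            return status
        time.sleep(poll_interval_s)
    return "TIMEOUT"
```

A shorter poll interval makes results appear sooner but sends more requests; 1000 ms is a reasonable middle ground for solver runs measured in tens of seconds.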
Timeout Guidelines
| Problem Size | Recommended Timeout |
|---|---|
| Small (< 20 entities) | 10-30 seconds |
| Medium (20-100 entities) | 30-60 seconds |
| Large (100+ entities) | 60-120 seconds |
AI Validation Settings
| Setting | Description |
|---|---|
| Enable AI Validation | Turn AI analysis on/off |
| AI Provider | Which AI service to use |
| AI Model | Which model to use |
| Temperature | 0.0 for consistent results |
Interpreting Common Failures
"Score Mismatch"
What it means: The actual score doesn't match what was expected.
Possible causes:
- Constraints changed
- Expected score is wrong
- Problem data changed
What to do:
- Review the actual vs expected scores
- Check if constraints were modified
- Update expected score if the new result is correct
"Infeasible Solution"
What it means: The solution has hard constraint violations (hard score < 0).
Possible causes:
- Not enough resources
- Conflicting constraints
- Invalid data
What to do:
- Check which hard constraints are violated
- Add more values (employees, time slots, etc.)
- Review constraint configuration
"Assignment Error"
What it means: An entity was assigned to an unexpected value.
Possible causes:
- Constraint not working correctly
- Expected assignment is too strict
- Missing constraint
What to do:
- Check which assignment failed
- Review the constraint that should enforce this
- Update expected assignments if needed
"Timeout"
What it means: The solver didn't finish in time.
Possible causes:
- Problem too large
- Timeout too short
- Solver performance issue
What to do:
- Increase the timeout setting
- Simplify the problem for testing
- Check solver health
"API Error"
What it means: Technical problem communicating with the solver.
Possible causes:
- Solver is offline
- Network issue
- Authentication problem
What to do:
- Check API health status
- Verify network connection
- Contact administrator
Stability Testing
What Is a Stability Probe?
A stability probe runs the same test multiple times to verify the solver produces consistent (deterministic) results.
Why It Matters
If results vary between runs, you can't trust the test results. The solver should produce the same solution for the same problem every time.
Running a Stability Probe
- Click the 🔄 button on a test card
- Wait for all runs to complete (default: 3 runs)
- Check the result:
| Result | Meaning |
|---|---|
| ✅ DETERMINISTIC | All runs produced identical results |
| ❌ NON-DETERMINISTIC | Results varied between runs |
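The comparison behind the two outcomes above is straightforward: run the test several times and check that every run produced an identical result. A sketch, assuming `run_test` is any callable returning a comparable result (score, assignments, etc.):

```python
def probe_stability(run_test, runs=3):
    """Run the same test several times (default 3, as above) and report
    whether every run produced an identical result."""
    results = [run_test() for _ in range(runs)]
    deterministic = all(r == results[0] for r in results)
    return "DETERMINISTIC" if deterministic else "NON-DETERMINISTIC"
```

Even one run that differs in a single soft-score digit is enough to flag the probe as non-deterministic.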
If Results Are Non-Deterministic
Check these settings:
| Setting | Required Value |
|---|---|
| Environment Mode | REPRODUCIBLE |
| Move Thread Count | 1 |
| Termination Type | STEP_COUNT |
| RNG Seed | Any fixed number |
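For reference, the table above expressed as a configuration sketch. The key names here are purely illustrative; check your deployment's solver configuration format for the actual field names:

```python
# Deterministic solver settings from the table above, as a payload sketch.
# Key names are assumptions, not the documented API.
deterministic_config = {
    "environmentMode": "REPRODUCIBLE",  # reproducible move selection
    "moveThreadCount": 1,               # single-threaded solving
    "termination": {"type": "STEP_COUNT", "stepCountLimit": 1000},
    "randomSeed": 42,                   # any fixed number works
}
```

The intuition: a fixed seed and single move thread remove randomness and thread-scheduling variation, and a step-count termination (rather than wall-clock time) makes the stopping point independent of machine speed.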
Running Multiple Tests
Run All Tests
- Click "Run All" button
- Tests run one at a time
- View summary when complete
Filter Tests
Use filters to run specific tests:
| Filter | What It Does |
|---|---|
| Category | Show only one problem type |
| Status | Show only pass/fail/pending |
| Search | Find tests by name |
Circuit Breaker
If multiple tests fail in a row, the system pauses to prevent wasted runs. This is called the "circuit breaker."
What to do:
- Fix the underlying issue (usually API problem)
- Click "Reset" to clear the circuit breaker
- Try again
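The circuit-breaker behavior described above can be sketched as a small state holder. The threshold and method names are illustrative, not the test runner's actual internals:

```python
class CircuitBreaker:
    """Pause test execution after too many consecutive failures."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.consecutive_failures = 0

    def record(self, passed: bool) -> None:
        # A pass clears the streak; a failure extends it.
        self.consecutive_failures = 0 if passed else self.consecutive_failures + 1

    @property
    def open(self) -> bool:
        # When open, the runner stops launching new tests until reset.
        return self.consecutive_failures >= self.threshold

    def reset(self) -> None:
        self.consecutive_failures = 0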
Exporting Results
Why Export?
- Keep a record of test runs
- Share results with team members
- Compare results over time
- Audit trail for compliance
How to Export
- Run the tests you want to export
- Click "Export Results"
- Save the JSON file
What's Included
- All test results
- Scores and validations
- Solver configuration
- Timestamps
- Problem snapshots
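Since the export is JSON, it is easy to post-process. The structure below is a hypothetical shape mirroring the "What's Included" list above; the real schema may differ:

```python
import json

# Hypothetical export shape; the actual schema may differ.
export = json.loads("""{
  "exportedAt": "2024-01-01T12:00:00Z",
  "results": [
    {"name": "Roster smoke test", "status": "PASS",
     "score": "0hard/-150soft"},
    {"name": "Task assignment", "status": "FAIL",
     "score": "-1hard/0soft"}
  ]
}""")

# Pull out the names of any non-passing tests for follow-up.
failed = [r["name"] for r in export["results"] if r["status"] != "PASS"]
```

Scripts like this are handy for the "compare results over time" use case: load two exports and diff their per-test scores.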
Best Practices
For Regular Testing
- ✅ Run tests after any constraint changes
- ✅ Use stability probes for important tests
- ✅ Export results for record-keeping
- ✅ Keep expected scores up to date
For Troubleshooting
- ✅ Enable AI validation for insights
- ✅ Check constraint violations first
- ✅ Review the actual solution
- ✅ Compare with a known-good test
For Production Validation
- ✅ Run all tests before deployment
- ✅ Verify determinism with stability probes
- ✅ Export results for audit
- ✅ Address all failures before go-live
Glossary
| Term | Definition |
|---|---|
| Feasible | Solution with hard score = 0 (valid) |
| Infeasible | Solution with hard score < 0 (invalid) |
| Hard Constraint | Must be satisfied or solution is invalid |
| Soft Constraint | Nice to have, affects optimization quality |
| Deterministic | Same input always produces same output |
| Stability Probe | Multiple test runs to verify consistency |
| Circuit Breaker | Automatic pause after repeated failures |
Quick Reference
Test Status Icons
| Icon | Status |
|---|---|
| ✅ | Pass |
| ❌ | Fail |
| ⚠️ | Error |
| 🔄 | Running |
| ⭕ | Pending |
Score Patterns
| Pattern | Meaning |
|---|---|
| 0hard | Hard score must be 0 |
| feasible | Hard score = 0 |
| infeasible | Hard score < 0 |
| -Xhard | Expect X hard violations |
Common Actions
| Action | Button |
|---|---|
| Run single test | ▶ on test card |
| Run all tests | "Run All" |
| Check stability | 🔄 on test card |
| View details | 👁 on test card |
| Export results | "Export Results" |
| Reload tests | "Reload Tests" |
Getting Help
In the Test Runner
- Check the API Health status first
- Review error messages in the result details
- Use AI validation for additional insights
Documentation
- Concepts Guide — Understanding scores and constraints
- User Guide — Complete PowerSolver reference
- Quick Start — Tutorials and examples
Support
Email: support@planningpowertools.com
Technical Reference
For advanced users and developers, see the full technical documentation:
- timefold-tests/README.md — Technical implementation details
- timefold-tests/Tests_User_Guide.md — Complete technical guide
- timefold-tests/IMPLEMENTATION_SUMMARY.md — Architecture overview