Error messages

AIminify provides comprehensive error handling with clear error messages and actionable suggestions to help you quickly diagnose and resolve issues. All errors follow a consistent format and include specific error codes for easy identification.

Error Response Format

All errors are reported through the feedback dictionary with the following structure:

{
    "success": False,
    "error_code": "ERROR_CODE",
    "error_message": "Detailed description of what went wrong",
    "suggestion": "Actionable steps to resolve the issue",
    "stage": "compression_stage",  # When applicable
    "recoverable": True / False,
    "error_type": "ExceptionClassName"
}
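As an illustration, a caller can branch on these fields to decide whether to retry or give up. The handle_feedback helper below is not part of AIminify; it is a sketch of one way to consume this structure:

```python
def handle_feedback(feedback):
    """Illustrative handler for the error response format above (not an AIminify API)."""
    if feedback["success"]:
        return "ok"
    # Recoverable errors can be retried with safer settings;
    # non-recoverable ones need environment or model changes first.
    if feedback.get("recoverable"):
        return f"retry: {feedback['suggestion']}"
    return f"fatal [{feedback['error_code']}]: {feedback['error_message']}"

example = {
    "success": False,
    "error_code": "COMPRESSION_ERROR",
    "error_message": "Quantization failed",
    "suggestion": "Disable quantization",
    "recoverable": True,
}
print(handle_feedback(example))  # retry: Disable quantization
```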

Error Types

1. Environment Errors (ENV_ERROR)

Cause: System requirements or dependencies are not met.
Common Scenarios:
  • CUDA toolkit not installed
  • TensorRT not available
  • PyTorch or TensorFlow not installed
  • Incompatible package versions
Example:
{
    "error_code": "ENV_ERROR",
    "error_message": "CUDA toolkit not found on system",
    "suggestion": "Install CUDA toolkit or use CPU-only mode by setting quantization=False",
    "recoverable": False
}
Resolution Steps:
  1. Install the missing component:
     • CUDA: download from the NVIDIA website
     • TensorRT: pip install aiminify[tensorrt]
     • PyTorch: pip install torch torchvision
     • TensorFlow: pip install tensorflow
  2. Alternatively, disable features that require the missing component:
     • Set quantization=False to skip CUDA-dependent quantization
     • Use CPU-only mode


2. Dependency Errors (DEP_ERROR)

Cause: Required packages are missing or have incompatible versions.
Common Scenarios:
  • Package not installed
  • Package version too old
  • Conflicting package versions
Example:
{
    "error_code": "DEP_ERROR",
    "error_message": "torch version 1.8.0 is below required minimum 2.0.0",
    "package_name": "torch",
    "required_version": "2.0.0",
    "suggestion": "Install or upgrade torch to version 2.0.0 or higher"
}
Resolution Steps:
  1. Upgrade the package: pip install --upgrade <package_name>
  2. Check compatibility with your Python version
  3. Review AIminify's requirements: pip show aiminify
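A minimum-version requirement like the one in the example above can also be checked in your own code before calling minify(). The sketch below uses a naive tuple comparison (it ignores suffixes such as 'rc1'; for real-world version strings, prefer packaging.version):

```python
def version_tuple(v):
    """Parse a plain dotted version string such as '2.1.0' into a comparable tuple.

    Note: this naive parser will fail on suffixed versions like '2.1.0rc1'.
    """
    return tuple(int(part) for part in v.split("."))

def meets_minimum(installed, required):
    """True if the installed version satisfies the required minimum."""
    return version_tuple(installed) >= version_tuple(required)

# Mirrors the example above: torch 1.8.0 against required minimum 2.0.0
print(meets_minimum("1.8.0", "2.0.0"))  # False
print(meets_minimum("2.1.0", "2.0.0"))  # True
```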


3. Model Errors (MODEL_ERROR)

Cause: Issues with model structure, type, or compatibility with compression techniques.
Common Scenarios:
  • Model is not FX-traceable (PyTorch)
  • Model architecture incompatible with quantization
  • Model architecture incompatible with pruning
  • Invalid model type (not a PyTorch or TensorFlow model)
Example:
{
    "error_code": "MODEL_ERROR",
    "error_message": "Model is not FX-traceable: dynamic control flow detected",
    "suggestion": "Try using a simpler model architecture or check PyTorch FX compatibility",
    "recoverable": False
}
Resolution Steps:
For FX-traceability issues (PyTorch):
  • Avoid dynamic control flow (if/else based on tensor values)
  • Use torch.where() instead of conditional statements
  • Simplify complex model architectures
  • Check the PyTorch FX documentation
For quantization incompatibilities:
  • Provide validation data for calibration
  • Try disabling quantization: quantization=False
  • Use compression_strength levels 0-2
For pruning incompatibilities:
  • Reduce compression_strength (0-2 for lighter pruning)
  • Disable pruning by using lower compression levels
  • Check model architecture for unsupported layer types


4. Compression Errors (COMPRESSION_ERROR)

Cause: The compression process failed during a specific stage.
Common Scenarios:
  • Pruning stage failures
  • Quantization stage failures
  • Fine-tuning stage failures
Example:
{
    "error_code": "COMPRESSION_ERROR",
    "error_message": "Quantization failed: TensorRT engine build error",
    "stage": "quantization",
    "suggestion": "Provide validation data for calibration or disable quantization",
    "recoverable": True
}
Resolution Steps by Stage:
Pruning Stage:
  • Reduce compression_strength to a lower level
  • Ensure training data is provided if using smart pruning
  • Check that model layers are supported for pruning
Quantization Stage:
  • Provide calibration data via val_loader or val_dataset
  • Disable quantization: quantization=False
  • Check TensorRT compatibility with your model architecture
  • Verify ONNX export compatibility
Fine-tuning Stage:
  • Verify training and validation data format
  • Check loss function compatibility
  • Disable fine-tuning: fine_tune=False
  • Review training hyperparameters


5. License Errors (LICENSE_ERROR)

Cause: Issues with license validation or authentication.
Common Scenarios:
  • License file not found
  • Invalid license file format
  • License expired
  • Network error contacting license server
Example:
{
    "error_code": "LICENSE_ERROR",
    "error_message": "License validation failed: license expired",
    "suggestion": "Run 'aiminify configure <username>' to set up your license or contact support@aiminify.com",
    "recoverable": False
}
Resolution Steps:
  1. Configure your license: aiminify configure <your_username>
  2. Verify the license file exists at ~/.aiminify/license.json
  3. Check your internet connection (required for validation)
  4. Contact support@aiminify.com if issues persist
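The license-file check can be scripted; the license_path helper below is illustrative (not an AIminify API) and simply builds the expected location mentioned above:

```python
from pathlib import Path

def license_path():
    """Location where AIminify expects the license file (~/.aiminify/license.json)."""
    return Path.home() / ".aiminify" / "license.json"

path = license_path()
if path.exists():
    print(f"License file found at {path}")
else:
    print(f"No license file at {path}; run 'aiminify configure <your_username>'")
```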


6. Validation Errors (VALIDATION_ERROR)

Cause: Invalid input parameters to the minify() function.
Common Scenarios:
  • Invalid compression_strength value (must be 0-5)
  • Invalid precision value (must be 'fp32' or 'mixed')
  • Wrong parameter types
  • Out-of-range values
Example:
{
    "error_code": "VALIDATION_ERROR",
    "error_message": "compression_strength must be one of [0, 1, 2, 3, 4, 5], got 10",
    "parameter_name": "compression_strength",
    "valid_values": [0, 1, 2, 3, 4, 5],
    "suggestion": "Valid values for compression_strength: [0, 1, 2, 3, 4, 5]"
}
Validated Parameters:

| Parameter            | Type  | Valid Values    | Default |
|----------------------|-------|-----------------|---------|
| compression_strength | int   | 0-5             | 3       |
| precision            | str   | 'fp32', 'mixed' | 'fp32'  |
| verbose              | int   | ≥ 0             | 1       |
| accumulation_steps   | int   | ≥ 1             | 1       |
| gradient_clip_val    | float | > 0             | None    |
| quantization         | bool  | True, False     | True    |
| fine_tune            | bool  | True, False     | True    |
| smart_pruning        | bool  | True, False     | True    |
| debug_mode           | bool  | True, False     | False   |
Resolution Steps:
  1. Check the error message for the specific parameter and valid values
  2. Ensure all parameters match the expected types
  3. Review the API documentation for parameter details
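These rules can also be checked up front, before calling minify(). The check_params helper below is a hypothetical pre-flight check that mirrors the table above; it is not an AIminify API:

```python
# Hypothetical pre-flight validation mirroring the parameter table above.
VALID = {
    "compression_strength": lambda v: isinstance(v, int) and v in range(6),
    "precision": lambda v: v in ("fp32", "mixed"),
    "verbose": lambda v: isinstance(v, int) and v >= 0,
    "accumulation_steps": lambda v: isinstance(v, int) and v >= 1,
    "gradient_clip_val": lambda v: v is None or (isinstance(v, float) and v > 0),
    "quantization": lambda v: isinstance(v, bool),
    "fine_tune": lambda v: isinstance(v, bool),
    "smart_pruning": lambda v: isinstance(v, bool),
    "debug_mode": lambda v: isinstance(v, bool),
}

def check_params(**params):
    """Return the names of parameters that would fail validation."""
    return [name for name, value in params.items()
            if name in VALID and not VALID[name](value)]

print(check_params(compression_strength=10, precision="fp32"))  # ['compression_strength']
```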


Debugging Errors

Enable Debug Mode

For detailed error traces and diagnostic information:
compressed_model, feedback = minify(
    model=model,
    train_loader=train_loader,
    debug_mode=True  # Enable full stack traces
)
Debug mode provides:
  • Full exception stack traces
  • Detailed logging at each compression stage
  • Model architecture information
  • Intermediate compression statistics

Check Error Response

Always check the feedback dictionary returned by minify():
compressed_model, feedback = minify(model, train_loader)

if not feedback['success']:
    print(f"Error Code: {feedback['error_code']}")
    print(f"Error Message: {feedback['error_message']}")
    print(f"Suggestion: {feedback['suggestion']}")

    # Check which stage failed
    if 'stage' in feedback:
        print(f"Failed at stage: {feedback['stage']}")

Common Debugging Steps

1. Start with lower compression strength
   • Try compression_strength=0 first, then gradually increase

2. Disable features selectively

   # Disable quantization
   minify(model, train_loader, quantization=False)

   # Disable fine-tuning
   minify(model, train_loader, fine_tune=False)

   # Both
   minify(model, train_loader, quantization=False, fine_tune=False)

3. Verify data loaders
   • Ensure data loaders return (input, target) tuples
   • Check batch sizes are consistent
   • Verify data types match model expectations
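A quick sanity check along these lines can be written in a few lines. The check_loader helper below is illustrative, and a plain list stands in for a DataLoader here; the same check works on a real loader, since a torch DataLoader yields batches the same way:

```python
def check_loader(loader):
    """Verify the first batch is an (input, target) pair, as minify() expects."""
    batch = next(iter(loader))
    if not (isinstance(batch, (tuple, list)) and len(batch) == 2):
        raise ValueError("Expected batches of (input, target) pairs")
    inputs, targets = batch
    return type(inputs).__name__, type(targets).__name__

# Stand-in for a real DataLoader
fake_loader = [([0.1, 0.2], 1), ([0.3, 0.4], 0)]
print(check_loader(fake_loader))  # ('list', 'int')
```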
4. Check model compatibility

   import torch

   # Test FX tracing (PyTorch); this raises if the model is not FX-traceable
   traced = torch.fx.symbolic_trace(model)

   # Run the traced module on an example input to confirm it behaves as expected
   example_input = torch.randn(1, 3, 224, 224)
   output = traced(example_input)


Error Prevention Best Practices

1. Validate Environment Before Compression

# Check dependencies
import torch
print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")

# Optional: Check TensorRT
try:
    import tensorrt as trt
    print(f"TensorRT version: {trt.__version__}")
except ImportError:
    print("TensorRT not installed (required for quantization)")

2. Start Conservative

# Start with minimal compression
compressed_model, feedback = minify(
    model=model,
    train_loader=train_loader,
    compression_strength=1,  # Low compression
    quantization=False,      # Disable advanced features
    fine_tune=False
)

# If successful, gradually increase compression

3. Provide Adequate Data

# For best results, provide all data loaders
compressed_model, feedback = minify(
    model=model,
    train_loader=train_loader,  # Required
    val_loader=val_loader,      # Recommended for quantization
    test_loader=test_loader,    # Recommended for validation
    epochs=3                    # Enough for fine-tuning
)

4. Handle Errors Gracefully

def safe_compress(model, train_loader):
    """Safely compress with fallback strategies."""
    strategies = [
        {"compression_strength": 3, "quantization": True, "fine_tune": True},
        {"compression_strength": 2, "quantization": True, "fine_tune": False},
        {"compression_strength": 1, "quantization": False, "fine_tune": False},
    ]

    for strategy in strategies:
        compressed_model, feedback = minify(model, train_loader, **strategy)

        if feedback['success']:
            return compressed_model, feedback
        else:
            print(f"Strategy failed: {feedback['suggestion']}")

    # All strategies failed, return original model
    return model, feedback


Getting Help

If you encounter persistent errors:
  1. Check the error message and suggestion - most errors include specific resolution steps
  2. Review the documentation
  3. Enable debug mode - it provides detailed diagnostic information
  4. Contact support at support@aiminify.com and include:
     • Error code and message
     • Full feedback dictionary (with debug_mode=True)
     • Model architecture (if possible)
     • AIminify version: pip show aiminify
     • Python environment: python --version, pip list


Error Code Quick Reference

| Error Code        | Description                  | Severity | Typical Resolution                                         |
|-------------------|------------------------------|----------|------------------------------------------------------------|
| ENV_ERROR         | Missing system dependencies  | High     | Install missing components or disable features             |
| DEP_ERROR         | Package version issues       | High     | Upgrade packages                                           |
| MODEL_ERROR       | Model incompatibility        | High     | Modify model architecture or disable incompatible features |
| COMPRESSION_ERROR | Compression stage failure    | Medium   | Reduce compression strength or disable failing stage       |
| LICENSE_ERROR     | License validation failed    | High     | Run aiminify configure or contact support                  |
| VALIDATION_ERROR  | Invalid parameters           | Low      | Check parameter values and types                           |
| UNEXPECTED_ERROR  | Unhandled exception          | High     | Enable debug mode and contact support                      |


Appendix: Example Error Scenarios

Scenario 1: Model with Dynamic Control Flow

Error:
{
    "error_code": "MODEL_ERROR",
    "error_message": "Model contains dynamic control flow that cannot be traced"
}
Solution:
import torch
import torch.nn as nn

# Before (causes error)
class BadModel(nn.Module):
    def forward(self, x):
        if x.sum() > 0:  # Dynamic control flow
            return self.layer1(x)
        else:
            return self.layer2(x)

# After (works)
class GoodModel(nn.Module):
    def forward(self, x):
        mask = (x.sum() > 0).float()
        # Or use: torch.where(x.sum() > 0, self.layer1(x), self.layer2(x))
        return mask * self.layer1(x) + (1 - mask) * self.layer2(x)

Scenario 2: Missing Calibration Data for Quantization

Error:
{
    "error_code": "COMPRESSION_ERROR",
    "stage": "quantization",
    "error_message": "INT8 calibration requires validation data"
}
Solution:
# Provide validation data
compressed_model, feedback = minify(
    model=model,
    train_loader=train_loader,
    val_loader=val_loader,  # Add this for quantization calibration
    compression_strength=4
)

Scenario 3: TensorRT Not Installed

Error:
{
    "error_code": "ENV_ERROR",
    "error_message": "TensorRT not available for quantization"
}
Solution:
# Option 1: Install TensorRT (shell command)
pip install aiminify[tensorrt]

# Option 2: Disable quantization in code
compressed_model, feedback = minify(
    model=model,
    train_loader=train_loader,
    quantization=False  # Skip TensorRT-dependent quantization
)