AIminify provides comprehensive error handling with clear error messages and actionable suggestions to help you quickly diagnose and resolve issues. All errors follow a consistent format and include specific error codes for easy identification.
```python
{
    "success": False,
    "error_code": "ERROR_CODE",
    "error_message": "Detailed description of what went wrong",
    "suggestion": "Actionable steps to resolve the issue",
    "stage": "compression_stage",
    "recoverable": True/False,
    "error_type": "ExceptionClassName"
}
```
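When you handle many `minify()` calls, it can help to centralize this checking. Below is a minimal sketch of a handler built on the format above; `handle_feedback` is a hypothetical helper, not part of the AIminify API:

```python
def handle_feedback(feedback):
    """Hypothetical helper: log an error and stop on unrecoverable failures."""
    if feedback['success']:
        return
    print(f"[{feedback['error_code']}] {feedback['error_message']}")
    print(f"Suggestion: {feedback['suggestion']}")
    if not feedback.get('recoverable', False):
        stage = feedback.get('stage', 'unknown')
        raise RuntimeError(f"Unrecoverable error in stage '{stage}'")
```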
## Error Codes

### ENV_ERROR

**Description**: System requirements or dependencies are not met.

**Common causes**:
- CUDA toolkit not installed
- TensorRT not available
- PyTorch or TensorFlow not installed
- Incompatible package versions

**Example**:
```python
{
    "error_code": "ENV_ERROR",
    "error_message": "CUDA toolkit not found on system",
    "suggestion": "Install CUDA toolkit or use CPU-only mode by setting quantization=False",
    "recoverable": False
}
```
**Resolution**:

Install the missing component:
- CUDA: download from the NVIDIA website
- TensorRT: `pip install aiminify[tensorrt]`
- PyTorch: `pip install torch torchvision`
- TensorFlow: `pip install tensorflow`

Alternatively, disable the features that require the missing component:
- Set `quantization=False` to skip CUDA-dependent quantization
- Use CPU-only mode (see the sketch below)
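One way to apply the CPU-only fallback automatically is to gate quantization on GPU availability. This is a sketch, not built-in AIminify behavior; note that `torch.cuda.is_available()` only confirms the CUDA runtime visible to PyTorch, not the full toolkit:

```python
import torch

# Skip CUDA-dependent quantization when no GPU stack is detected
# (assumes minify is imported as in the other snippets on this page)
use_quantization = torch.cuda.is_available()
compressed_model, feedback = minify(
    model=model,
    train_loader=train_loader,
    quantization=use_quantization,
)
```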
### DEP_ERROR

**Description**: Required packages are missing or have incompatible versions.

**Common causes**:
- Package not installed
- Package version too old
- Conflicting package versions

**Example**:
```python
{
    "error_code": "DEP_ERROR",
    "error_message": "torch version 1.8.0 is below required minimum 2.0.0",
    "package_name": "torch",
    "required_version": "2.0.0",
    "suggestion": "Install or upgrade torch to version 2.0.0 or higher"
}
```
**Resolution**:
- Upgrade the package: `pip install --upgrade <package_name>`
- Check compatibility with your Python version
- Review AIminify's requirements: `pip show aiminify`
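To confirm what is actually installed before upgrading, you can query package versions from Python. A small sketch; the package list here is illustrative:

```python
from importlib.metadata import version, PackageNotFoundError

# Print installed versions of relevant packages (illustrative list)
for pkg in ("torch", "tensorflow", "aiminify"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```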
### MODEL_ERROR

**Description**: Issues with the model's structure, type, or compatibility with compression techniques.

**Common causes**:
- Model is not FX-traceable (PyTorch)
- Model architecture incompatible with quantization
- Model architecture incompatible with pruning
- Invalid model type (not a PyTorch or TensorFlow model)

**Example**:
```python
{
    "error_code": "MODEL_ERROR",
    "error_message": "Model is not FX-traceable: dynamic control flow detected",
    "suggestion": "Try using a simpler model architecture or check PyTorch FX compatibility",
    "recoverable": False
}
```
**Resolution**:

**For FX-tracing errors**:
- Avoid dynamic control flow (if/else based on tensor values)
- Use `torch.where()` instead of conditional statements (see the sketch below)
- Simplify complex model architectures
- Check the PyTorch FX documentation
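As a minimal illustration of the `torch.where()` substitution, the tensors below are placeholders:

```python
import torch

x = torch.randn(4)

# A Python `if` on tensor values breaks symbolic tracing;
# torch.where expresses the choice as a tensor op, which traces cleanly
out = torch.where(x > 0, x, torch.zeros_like(x))
```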
**For quantization errors**:
- Provide validation data for calibration
- Try disabling quantization: `quantization=False`
- Use `compression_strength` levels 0-2

**For pruning errors**:
- Reduce `compression_strength` (0-2 for lighter pruning)
- Disable pruning by using lower compression levels
- Check the model architecture for unsupported layer types

### COMPRESSION_ERROR

**Description**: The compression process failed during a specific stage.
**Common causes**:
- Pruning stage failures
- Quantization stage failures
- Fine-tuning stage failures

**Example**:
```python
{
    "error_code": "COMPRESSION_ERROR",
    "error_message": "Quantization failed: TensorRT engine build error",
    "stage": "quantization",
    "suggestion": "Provide validation data for calibration or disable quantization",
    "recoverable": True
}
```
**Resolution**:

**For pruning failures**:
- Reduce `compression_strength` to a lower level
- Ensure training data is provided if using smart pruning
- Check that the model's layers are supported for pruning

**For quantization failures**:
- Provide calibration data via `val_loader` or `val_dataset`
- Disable quantization: `quantization=False`
- Check TensorRT compatibility with your model architecture
- Verify ONNX export compatibility

**For fine-tuning failures**:
- Verify the training and validation data format
- Check loss function compatibility
- Disable fine-tuning: `fine_tune=False`
- Review training hyperparameters
### LICENSE_ERROR

**Description**: Issues with license validation or authentication.

**Common causes**:
- License file not found
- Invalid license file format
- License expired
- Network error contacting the license server

**Example**:
```python
{
    "error_code": "LICENSE_ERROR",
    "error_message": "License validation failed: license expired",
    "suggestion": "Run 'aiminify configure <username>' to set up your license or contact support@aiminify.com",
    "recoverable": False
}
```
**Resolution**:
- Configure your license: `aiminify configure <your_username>`
- Verify the license file exists at `~/.aiminify/license.json`
- Check your internet connection (required for validation)
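If validation keeps failing, it can help to check the license file directly. A quick sanity check, assuming the default path mentioned above:

```python
import json
from pathlib import Path

license_path = Path.home() / ".aiminify" / "license.json"
print(f"License file exists: {license_path.exists()}")
if license_path.exists():
    json.loads(license_path.read_text())  # raises if the file is not valid JSON
    print("License file parses as JSON")
```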
### VALIDATION_ERROR

**Description**: Invalid input parameters to the `minify()` function.

**Common causes**:
- Invalid `compression_strength` value (must be 0-5)
- Invalid `precision` value (must be `'fp32'` or `'mixed'`)
- Wrong parameter types
- Out-of-range values

**Example**:
```python
{
    "error_code": "VALIDATION_ERROR",
    "error_message": "compression_strength must be one of [0, 1, 2, 3, 4, 5], got 10",
    "parameter_name": "compression_strength",
    "valid_values": [0, 1, 2, 3, 4, 5],
    "suggestion": "Valid values for compression_strength: [0, 1, 2, 3, 4, 5]"
}
```
**Resolution**:
- Check the error message for the specific parameter and its valid values
- Ensure all parameters match the expected types
- Review the API documentation for parameter details
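The extra keys in the example above (`parameter_name`, `valid_values`) make it possible to react to validation errors programmatically. A sketch:

```python
# Deliberately out-of-range value, to show the feedback keys in use
compressed_model, feedback = minify(model, train_loader, compression_strength=10)
if not feedback['success'] and feedback['error_code'] == 'VALIDATION_ERROR':
    print(f"Invalid {feedback['parameter_name']}; "
          f"valid values: {feedback['valid_values']}")
```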
## Debug Mode

For detailed error traces and diagnostic information, enable debug mode:
```python
compressed_model, feedback = minify(
    model=model,
    train_loader=train_loader,
    debug_mode=True
)
```
Debug mode provides:
- Full exception stack traces
- Detailed logging at each compression stage
- Model architecture information
- Intermediate compression statistics

## Best Practices

Always check the feedback dictionary returned by `minify()`:
```python
compressed_model, feedback = minify(model, train_loader)
if not feedback['success']:
    print(f"Error Code: {feedback['error_code']}")
    print(f"Error Message: {feedback['error_message']}")
    print(f"Suggestion: {feedback['suggestion']}")
    if 'stage' in feedback:
        print(f"Failed at stage: {feedback['stage']}")
```
**Start conservative**: try `compression_strength=0` first, then gradually increase (a search sketch follows below).
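One way to operationalize this is a simple upward search that stops when compression fails or quality drops. A sketch only: `evaluate` and `min_accuracy` are hypothetical, user-supplied pieces, not part of the AIminify API:

```python
def find_max_strength(model, train_loader, evaluate, min_accuracy):
    """Hypothetical helper: raise compression_strength until quality degrades.

    `evaluate` is an assumed user-supplied function returning model accuracy.
    """
    best = (model, None)
    for strength in range(6):  # valid values per the docs are 0-5
        compressed, feedback = minify(model, train_loader,
                                      compression_strength=strength)
        if not feedback['success'] or evaluate(compressed) < min_accuracy:
            break
        best = (compressed, feedback)
    return best
```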
**Isolate the failing stage** by disabling features one at a time:

```python
minify(model, train_loader, quantization=False)
minify(model, train_loader, fine_tune=False)
minify(model, train_loader, quantization=False, fine_tune=False)
```
**Validate your data** (a quick check follows below):
- Ensure data loaders return `(input, target)` tuples
- Check that batch sizes are consistent
- Verify data types match the model's expectations
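A quick way to run these checks on a loader before calling `minify()`, assuming the standard `(input, target)` batch convention above:

```python
inputs, targets = next(iter(train_loader))  # fails fast if batches don't unpack
print(f"Input batch:  {tuple(inputs.shape)}, dtype={inputs.dtype}")
print(f"Target batch: {tuple(targets.shape)}, dtype={targets.dtype}")
```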
**Test model traceability (PyTorch)**:

```python
import torch

# symbolic_trace succeeds only if the model is FX-traceable
traced = torch.fx.symbolic_trace(model)
example_input = torch.randn(1, 3, 224, 224)
traced(example_input)  # run the traced graph on a dummy input
```
**Check your environment**:

```python
import torch
print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")

try:
    import tensorrt as trt
    print(f"TensorRT version: {trt.__version__}")
except ImportError:
    print("TensorRT not installed (required for quantization)")
```
Start with a minimal configuration:

```python
compressed_model, feedback = minify(
    model=model,
    train_loader=train_loader,
    compression_strength=1,
    quantization=False,
    fine_tune=False
)
```
Once that succeeds, scale up incrementally:

```python
compressed_model, feedback = minify(
    model=model,
    train_loader=train_loader,
    val_loader=val_loader,
    test_loader=test_loader,
    epochs=3
)
```
For automated recovery, fall back through progressively lighter strategies:

```python
def safe_compress(model, train_loader):
    """Safely compress with fallback strategies."""
    strategies = [
        {"compression_strength": 3, "quantization": True, "fine_tune": True},
        {"compression_strength": 2, "quantization": True, "fine_tune": False},
        {"compression_strength": 1, "quantization": False, "fine_tune": False},
    ]
    for strategy in strategies:
        compressed_model, feedback = minify(model, train_loader, **strategy)
        if feedback['success']:
            return compressed_model, feedback
        print(f"Strategy failed: {feedback['suggestion']}")
    return model, feedback
```
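Example usage of the helper above:

```python
compressed_model, feedback = safe_compress(model, train_loader)
if not feedback['success']:
    print("All strategies failed; original model returned unchanged")
```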
## Getting Help

If you encounter persistent errors:

1. **Check the error message**: most errors include specific resolution steps
2. **Enable debug mode**: provides detailed diagnostic information
3. **Contact support**:
   - Email: support@aiminify.com
   - Include: the error code and message, the full feedback dictionary (with `debug_mode=True`), the model architecture (if possible), your AIminify version (`pip show aiminify`), and your Python environment (`python --version`, `pip list`)
| Error Code | Meaning | Quick Fix |
| --- | --- | --- |
| `ENV_ERROR` | Missing system dependencies | Install missing components or disable features |
| `MODEL_ERROR` | Model incompatible with compression | Modify model architecture or disable incompatible features |
| `COMPRESSION_ERROR` | Compression stage failure | Reduce compression strength or disable failing stage |
| `LICENSE_ERROR` | License validation failed | Run `aiminify configure` or contact support |
| `VALIDATION_ERROR` | Invalid input parameters | Check parameter values and types |
| Other | Unexpected error | Enable debug mode and contact support |
## Common Error Scenarios

### Model is not FX-traceable

**Error**:
```python
{
    "error_code": "MODEL_ERROR",
    "error_message": "Model contains dynamic control flow that cannot be traced"
}
```
**Solution**: refactor data-dependent branches into tensor operations:
```python
import torch.nn as nn

class BadModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1, self.layer2 = nn.Linear(8, 8), nn.Linear(8, 8)

    def forward(self, x):
        if x.sum() > 0:  # data-dependent branch: breaks FX tracing
            return self.layer1(x)
        return self.layer2(x)

class GoodModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1, self.layer2 = nn.Linear(8, 8), nn.Linear(8, 8)

    def forward(self, x):
        mask = (x.sum() > 0).float()  # tensor arithmetic: traceable
        return mask * self.layer1(x) + (1 - mask) * self.layer2(x)
```
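To confirm the refactor worked, trace the rewritten model directly (the layer sizes above are illustrative, not taken from the original error):

```python
import torch

traced = torch.fx.symbolic_trace(GoodModel())  # raises on BadModel
```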
### Quantization fails without calibration data

**Error**:
```python
{
    "error_code": "COMPRESSION_ERROR",
    "stage": "quantization",
    "error_message": "INT8 calibration requires validation data"
}
```
**Solution**: provide validation data for calibration:
```python
compressed_model, feedback = minify(
    model=model,
    train_loader=train_loader,
    val_loader=val_loader,
    compression_strength=4
)
```
### TensorRT not available

**Error**:
```python
{
    "error_code": "ENV_ERROR",
    "error_message": "TensorRT not available for quantization"
}
```
**Solution**: install TensorRT:

```bash
pip install aiminify[tensorrt]
```

Or disable quantization:

```python
compressed_model, feedback = minify(
    model=model,
    train_loader=train_loader,
    quantization=False
)
```