This guide explains how to install, configure, and run the PLATCOM platform. It follows the structure and styling of the official PLATCOM project page while presenting technical instructions clearly.
The package includes all scripts, configuration files, and setup utilities.
Note: The PLATCOM package is protected with basic authentication. If you would like to test PLATCOM, please email us at info@rinnoco.com to receive your access credentials.
PLATCOM uses machine learning to predict the optimal compression algorithm for your database columns. Instead of manually testing different compression schemes, PLATCOM analyzes your data's statistical characteristics and recommends the best compression for each column.
Supported compression schemes: GZIP, ZSTD, LZMA, BZIP2
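To make the trade-off concrete, the sketch below brute-forces what PLATCOM predicts: it compresses one toy column with each of the four supported schemes and prints the resulting sizes. It is an illustrative snippet, not part of PLATCOM; `gzip`, `bz2`, and `lzma` come from the Python standard library, while ZSTD requires the third-party `zstandard` package.

```python
# Illustrative only: manually comparing the four schemes PLATCOM chooses between.
# Requires the third-party "zstandard" package for ZSTD (pip install zstandard).
import bz2
import gzip
import lzma

import zstandard  # third-party

# A toy "column" of repetitive sensor-style readings.
column = "\n".join(f"{i % 50},{(i % 7) * 0.5:.2f}" for i in range(10_000)).encode()

sizes = {
    "GZIP": len(gzip.compress(column)),
    "ZSTD": len(zstandard.ZstdCompressor().compress(column)),
    "LZMA": len(lzma.compress(column)),
    "BZIP2": len(bz2.compress(column)),
}

print(f"raw size: {len(column)} bytes")
for scheme, size in sorted(sizes.items(), key=lambda kv: kv[1]):
    print(f"{scheme:>5}: {size:>6} bytes ({size / len(column):.1%} of raw)")
```

PLATCOM's point is to skip this exhaustive comparison: it predicts the winning scheme from column statistics instead of compressing every column with every scheme.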
After extracting PLATCOM_v[version].zip, you'll see:
- `bins/`: compiled binaries (setup and deployment)
- `data/`: CSV data files (downloaded during install)
- `scripts/`: helper scripts for import, cleanup, and CSV fixing
- `install_project.sh`: run this first; it downloads the data, sets up the database, and installs dependencies
- `run_platcom.sh`: main runner for training (`--setup`) and prediction (`--deploy`)
- `update_platcom.sh`: updates to a newer version (creates a backup first)
- `db_config.txt`: database connection settings for setup (training)
- `deployment_config.txt`: database connection settings for deployment (prediction)
- a PostgreSQL container configuration
- the database schema (table structure)
- `FILE_MODE.md`: documentation for CSV file mode (no database required)
- a file recording the current version
- a file that tracks installation progress (allows resuming if interrupted)
To get PLATCOM running:

1. Install (downloads data, sets up PostgreSQL, installs dependencies): `bash install_project.sh`
2. Setup (analyzes your data and trains the ML models): `bash run_platcom.sh --setup`
   Output: `output_[version]/output/`; report: `results_[version]/setup_report.txt`
3. Deploy (uses the trained models to predict optimal compression): `bash run_platcom.sh --deploy`
   Output: `output_[version]/output_deployment/`; report: `results_[version]/deploy_report.txt`

Check the `results_[version]/` folder for the human-readable reports.
After running setup and deployment, these folders are created:

    output_[version]/
    |
    +-- output/                          ← Created by --setup
    |   +-- tables_statistics.csv        Column stats from all tables
    |   +-- best_results.csv             Best compression per table
    |   +-- decision_trees/              Trained ML models (SUM/AVG/MIN/MAX)
    |   +-- [table_name]/                Per-table analysis files
    |
    +-- output_deployment/               ← Created by --deploy
        +-- deployment_labels/
        |   +-- [table_name]/
        |       +-- labels               Predicted compression (one per column)
        +-- deployment_aggregated_stats/

    results_[version]/                   ← Human-readable reports
    +-- setup_report.txt                 Training summary & statistics
    +-- deploy_report.txt                Prediction results & recommendations
The reports in `results_[version]/` are the most useful outputs: they summarize everything in plain English.
| Feature | Database Mode (Default) | File Mode |
|---|---|---|
| Data Source | PostgreSQL database | CSV files in data/ |
| Requires Docker | Yes | No |
| Primary/Foreign Keys | Detected from schema | Not available |
| Best For | Production, full accuracy | Quick testing, environments without Docker |
| Install Command | `bash install_project.sh` | `bash install_project.sh --file-mode` |
| Run Command | `bash run_platcom.sh --setup` | `bash run_platcom.sh --file-mode --setup` |
See `FILE_MODE.md` for details.
| Command | Description |
|---|---|
| `bash install_project.sh` | Full install: download data + set up PostgreSQL |
| `bash install_project.sh --file-mode` | Install for CSV mode only (no Docker) |
| `bash install_project.sh --skip-download` | Skip the data download (use existing `data/`) |
| `bash install_project.sh --force-download` | Re-download the data even if it already exists |
| `bash install_project.sh --help` | Show all options |
| Command | Description |
|---|---|
| `bash run_platcom.sh --setup` | Train models (required first) |
| `bash run_platcom.sh --deploy` | Generate predictions |
| `bash run_platcom.sh --file-mode --setup` | Train using CSV files |
| `bash run_platcom.sh --file-mode --deploy` | Predict using CSV files |
| `bash run_platcom.sh --clean-setup` | Remove training artifacts |
| `bash run_platcom.sh --clean-deployment` | Remove deployment artifacts |
| `bash run_platcom.sh --reset-project` | Full reset (removes everything) |
| `bash run_platcom.sh --help` | Show all options |
To update to a newer version:
`bash update_platcom.sh`

- A backup is created in `PLATCOM_backup_[timestamp]/`
- The `data/` folder and results are kept
- Your `db_config.txt` and `deployment_config.txt` settings are kept
If you need to go back to a previous version:
# List backups
ls -la PLATCOM_backup_*
# Restore from backup
rm -rf bins scripts *.sh
cp -r PLATCOM_backup_[timestamp]/* .
`--setup`: run this first. It analyzes your data and builds the ML models.
Output: output_[version]/output/
Report: results_[version]/setup_report.txt
Time: 5-30 minutes depending on data size
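If you want to inspect the training outputs programmatically rather than through the report, something like the following works; it only assumes that `best_results.csv` is a regular comma-separated file with a header row (the exact column names depend on your data and PLATCOM version).

```python
# Minimal sketch for peeking at the per-table results produced by --setup.
# Assumes best_results.csv is a standard CSV with a header row;
# replace [version] with your actual PLATCOM version string.
import csv
from pathlib import Path

results_path = Path("output_[version]/output/best_results.csv")

with results_path.open(newline="") as fh:
    rows = list(csv.DictReader(fh))

print(f"{len(rows)} rows; columns: {list(rows[0].keys()) if rows else 'n/a'}")
for row in rows[:5]:  # show the first few entries
    print(row)
```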
`--deploy`: run after setup. It uses the trained models to generate the predictions.
Output: output_[version]/output_deployment/
Report: results_[version]/deploy_report.txt
Time: 1-5 minutes
Predictions use numeric labels (0-3):
| Label | Scheme | Best For |
|---|---|---|
| 0 | GZIP | Compatibility, web APIs, general purpose |
| 1 | ZSTD | Best ratio + speed balance, modern systems |
| 2 | LZMA | Maximum compression, archival |
| 3 | BZIP2 | High compression, batch jobs |
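As an illustration of how these labels can be consumed downstream, the sketch below maps each predicted label in a deployment `labels` file back to its scheme name. It assumes the file lists one integer per line (one per column), as described above; adapt the path and the parsing to whatever your PLATCOM version actually writes.

```python
# Sketch: translate PLATCOM's numeric predictions (0-3) into scheme names.
# Assumes the labels file contains one integer per line, one per column;
# replace [version] and [table_name] with your actual values.
from pathlib import Path

SCHEMES = {0: "GZIP", 1: "ZSTD", 2: "LZMA", 3: "BZIP2"}

labels_file = Path(
    "output_[version]/output_deployment/deployment_labels/[table_name]/labels"
)

for column_index, line in enumerate(labels_file.read_text().splitlines()):
    label = int(line.strip())
    print(f"column {column_index}: label {label} -> {SCHEMES[label]}")
```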
Edit `db_config.txt` (and `deployment_config.txt`) to customize behavior. `db_config.txt` looks like this:
# Database connection
dbname=postgres
user=postgres
password=ChangeMe!123
hostaddr=127.0.0.1
port=5432
# How many rows to analyze (more = slower but more accurate)
limit=LIMIT 5000
# Tables to analyze (comma-separated)
table_names=measurements_basic,measurements_dust,battery,...
# File mode (set to true for CSV mode)
file_mode=false
csv_directory=data/
`deployment_config.txt` uses the same format as `db_config.txt`. Keep the settings consistent between both files.
Use `limit=LIMIT 5000` for testing, and `limit=LIMIT 50000` (or an empty value) for production.
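If you script around PLATCOM and want to reuse these settings, the `key=value` lines are easy to read; the following is a small convenience sketch, not PLATCOM's own config loader.

```python
# Simple key=value reader for db_config.txt / deployment_config.txt.
# Convenience sketch only; this is not PLATCOM's own config loader.
from pathlib import Path


def read_config(path: str) -> dict[str, str]:
    settings: dict[str, str] = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        key, _, value = line.partition("=")   # split on the first '=' only
        settings[key.strip()] = value.strip()
    return settings


config = read_config("db_config.txt")
print(config.get("dbname"), config.get("port"), config.get("limit"))
```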
If PLATCOM cannot reach the database, first check that Docker is running:
# Check Docker is running
docker ps
# If not, start Docker Desktop (Windows) or:
sudo systemctl start docker # Linux
Some columns have invalid data. This is handled automatically, but if it persists:
rm -rf output_*/
bash run_platcom.sh --setup
Corrupted training data. Clean and re-run:
rm -rf output_*/
bash run_platcom.sh --setup
If the scripts are not executable, make them executable:
chmod +x *.sh scripts/*.sh
The `fix_csvs.py` script handles most issues. If problems persist, check your CSV format.
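As a quick sanity check on your own files, a small sketch like the one below (independent of `fix_csvs.py`) flags rows whose field count doesn't match the header, assuming comma-delimited files in `data/`.

```python
# Quick CSV sanity check, independent of PLATCOM's fix_csvs.py:
# flag rows whose field count differs from the header's.
import csv
from pathlib import Path

for csv_path in sorted(Path("data").glob("*.csv")):
    with csv_path.open(newline="") as fh:
        reader = csv.reader(fh)
        header = next(reader, None)
        if header is None:
            print(f"{csv_path}: empty file")
            continue
        bad = [i for i, row in enumerate(reader, start=2) if len(row) != len(header)]
    status = f"{len(bad)} malformed row(s), first at line {bad[0]}" if bad else "OK"
    print(f"{csv_path}: {len(header)} columns, {status}")
```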