
Troubleshooting

AGDS is designed to be restartable, meaning export and import operations can be safely rerun after interruption or failure without causing data corruption, duplication, or loss.

On the export side, restartability is achieved through the use of synchronization records and a configuration-based MD5 hash. Each export run creates a synchronization record at start time and only finalizes it when the run completes successfully. If an export fails, the record is left incomplete (finished_at remains NULL), and AGDS does not advance the incremental cursor. When the export is rerun with the same hash-affecting flags, ThreatQ automatically resumes from the last successfully completed export, ensuring that previously exported data is not duplicated and no changes are missed. 
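The start/finish record contract can be sketched with a small, hypothetical shell illustration. This is file-based and invented for clarity; the real AGDS state lives in the synchronizations database table, not in a flat file.

```shell
#!/bin/sh
# Hypothetical illustration of export restartability: a run record is
# opened at start and only marked finished on success. An interrupted
# run leaves finished_at unset, so a rerun knows to resume it.
RECORD=/tmp/agds_demo_sync_record

start_run() {
  # An existing record without finished_at marks an interrupted run.
  if [ -f "$RECORD" ] && ! grep -q '^finished_at=' "$RECORD"; then
    echo "resuming incomplete run"
  else
    printf 'started_at=%s\n' "$(date -u +%FT%TZ)" > "$RECORD"
    echo "starting new run"
  fi
}

finish_run() {
  # Only reached on success; a crash beforehand leaves finished_at unset.
  printf 'finished_at=%s\n' "$(date -u +%FT%TZ)" >> "$RECORD"
}

rm -f "$RECORD"
start_run      # opens a new record
# ...export work happens here; a crash at this point would leave
# the record without finished_at...
finish_run     # marks the run complete
```

A rerun after a simulated crash (a record with no `finished_at` line) takes the resume branch rather than starting over.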

On the import side, restartability is enabled through the use of temporary synchronization tables and deterministic record matching via sync_hash values. Imported data is first loaded into temporary tables, validated, and then merged into production tables as either updates or inserts. If an import fails, partially written data is safely handled—rerunning the import will update existing records rather than duplicating them, and no destructive operations are performed. The import process never deletes data from the target system, further reducing risk during retries.
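The merge-by-key behavior that makes retries safe can be sketched with a flat file standing in for the temporary and production tables. All names here are illustrative, not the actual AGDS implementation.

```shell
#!/bin/sh
# Hypothetical sketch of an idempotent merge keyed on sync_hash:
# a rerun overwrites the matching record instead of appending a
# duplicate, and nothing is ever deleted.
TABLE=/tmp/agds_demo_table   # one "row" per line: <sync_hash> <payload>

merge_record() {             # merge_record <sync_hash> <payload>
  hash=$1; payload=$2
  if grep -q "^$hash " "$TABLE" 2>/dev/null; then
    # Key already present: update the existing row in place.
    sed -i "s/^$hash .*/$hash $payload/" "$TABLE"
  else
    echo "$hash $payload" >> "$TABLE"
  fi
}

rm -f "$TABLE"
merge_record abc123 indicator-v1
merge_record abc123 indicator-v2   # rerun: updates, does not duplicate
merge_record def456 event-v1
wc -l < "$TABLE"                   # 2 rows, not 3
```

Rerunning `merge_record` with the same key any number of times leaves exactly one row for that key, which is the property that makes a failed import safe to retry.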

This allows administrators to correct the underlying issue (such as memory limits, disk space, or configuration errors) and rerun the same command without manual cleanup or state resets. 

Before You Begin

You should confirm the following before proceeding to troubleshoot an AGDS issue:

  • Source and target are on the same ThreatQ version.
  • System time is set to UTC on both systems.
  • Kubernetes pods are healthy:
    kubectl get pods -n threatq

  • Sufficient disk space exists:
    df -h /var/lib/threatq/agds_transfer

  • You are using the correct user (non-root RKE2 user).
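The checks above can be wrapped in a few helper functions. This is a sketch only; AGDS ships no such script, and the version strings are assumed to be supplied by the operator (for example, from each instance's UI).

```shell
#!/bin/sh
# Hypothetical pre-flight helpers for the checklist above.
fail() { echo "PRECHECK FAILED: $1" >&2; return 1; }

check_versions() {   # check_versions <source_version> <target_version>
  [ "$1" = "$2" ] || fail "version mismatch: $1 vs $2"
}

check_utc() {        # system clock must report UTC
  [ "$(date +%Z)" = "UTC" ] || fail "time zone is $(date +%Z), expected UTC"
}

check_nonroot() {    # AGDS commands run as the non-root RKE2 user
  [ "$(id -u)" -ne 0 ] || fail "run as the non-root RKE2 user, not root"
}

# Usage (pod and disk checks reuse the commands shown above):
#   check_versions "5.29.0" "5.29.0" && check_utc && check_nonroot \
#     && kubectl get pods -n threatq \
#     && df -h /var/lib/threatq/agds_transfer
```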

Troubleshooting Scenarios

Each scenario below lists its possible causes and what to check.

Scenario: The manual export process fails immediately and no tarball is created.
  Possible causes:
    • Permission issues
    • Invalid flags
    • Insufficient memory
  What to check:
    • Export console output
    • Laravel logs

Scenario: The manual export process completes but does not create the tarball.
  Possible causes:
    • CSV generation error
    • The disk is full
    • File system permissions
  What to check:
    • Export console output
    • Laravel log
    • Disk space

Scenario: Cron-based export runs but does not create the tarball, and there may be no log output.
  Possible causes:
    • The cron job runs as the wrong user
    • Cron environment differences
    • kubectl is missing from the cron PATH
  What to check:
    • Whether cron logging is enabled
    • Synchronization records
    • Cron user permissions

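For the cron scenario, a minimal crontab sketch shows how to pin PATH and capture output so failures are logged. The script name, schedule, and log path are hypothetical; `/var/lib/rancher/rke2/bin` is the usual RKE2 kubectl location, but confirm it on your system.

```
# Illustrative crontab entry; adjust the script path, schedule, and
# log location for your installation.
PATH=/usr/local/bin:/usr/bin:/bin:/var/lib/rancher/rke2/bin
0 2 * * * /usr/local/bin/tq-agds-export.sh >> /var/log/agds-export.log 2>&1
```

Install it with `crontab -e` as the same non-root RKE2 user that runs manual exports, so the cron environment matches the one you tested interactively.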
Scenario: Incremental exports re-export all data, resulting in large exports every run and no reduction in data volume.
  Possible causes:
    • The --target, --include-investigations, and/or --include-deleted flags have changed between runs
  What to check:
    • The hash column in the synchronizations table
    • Export flag consistency across runs

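Why a flag change forces a full re-export can be illustrated with a hash sketch. The exact serialization AGDS hashes is not documented here; this only demonstrates the same-input/same-hash property that lets matching runs resume incrementally.

```shell
#!/bin/sh
# Illustration (not the exact AGDS serialization): the hash-affecting
# flag set reduces to a hash; if it changes, it no longer matches the
# stored synchronization records and the next export starts from scratch.
flags_hash() { printf '%s' "$*" | md5sum | cut -d' ' -f1; }

H1=$(flags_hash --target=/var/lib/threatq/agds_transfer --include-investigations)
H2=$(flags_hash --target=/var/lib/threatq/agds_transfer --include-investigations)
H3=$(flags_hash --target=/var/lib/threatq/agds_transfer)   # flag removed

[ "$H1" = "$H2" ] && echo "same flags -> same hash (incremental resume)"
[ "$H1" = "$H3" ] || echo "changed flags -> new hash (full re-export)"
```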
Scenario: The import process fails midway, resulting in partial data appearing on the target instance.
  Possible causes:
    • Missing sync_hash initialization
    • Memory constraints
    • Data integrity issues
  What to check:
    • Import console output
    • Import sync report
    • Laravel logs

Scenario: The import process completes but the target instance is missing data.
  Possible causes:
    • The export flags used (--sources, --include-investigations, etc.)
    • Custom objects not installed on the target instance
  What to check:
    • The export and import sync report files
    • The synchronizations table for the --include-investigations flag
    • If custom objects are in use, confirm they are installed on the target instance

AGDS Recovery Steps

If the AGDS process fails:

  1. Do not delete synchronization records.
  2. Fix the underlying issue (memory, disk space, or flags).
  3. Rerun the same command.
  4. Verify that a new finished_at value is populated in the synchronizations table.
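Step 4 can be spot-checked with a query sketch. This assumes shell access to the ThreatQ MySQL database; the synchronizations table and finished_at column are referenced in this guide, while the database name and other column names shown are assumptions to adapt for your installation.

```
mysql threatq -e "SELECT id, finished_at FROM synchronizations
                  ORDER BY id DESC LIMIT 5;"
```

A NULL finished_at on the newest row means the rerun has not yet completed successfully.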