\\
!!Amazon Glacier
\\
 Amazon Glacier is a cloud-based storage service designed for long-term data archiving and backup. It offers secure, durable, and low-cost storage for infrequently accessed data. For more information, visit: [Amazon Glacier Link|https://docs.aws.amazon.com/glacier/index.html#lang/en_us].\\
---- 
 __⚠️ Important__: The dependency jar file must be downloaded and placed in your __CrushFTP Install Folder/plugins/lib__. ⚠️ A restart is required to load the new Glacier dependency jar. [Download Link|aws-java-sdk.jar]\\
----
__⚠️ Proxy Configuration__: If your server accesses the internet through a proxy, ensure that the __Glacier__ endpoint domains (glacier.{REGION}.amazonaws.com) are whitelisted.\\
----
\\
The __URL__ should look like this (replace the placeholders with your actual values):\\
{{{
glacier://{ACCESS_KEY_ID}:{SECRET_KEY_ID}@glacier.{REGION}.amazonaws.com/
}}}\\
Make sure to insert your Access Key ID, Secret Key, and AWS region to form a valid connection URL for Glacier.\\
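For example, with placeholder credentials (not real keys) and the default us-east-1 region, the finished URL would look like this:\\
{{{
glacier://AKIAIOSFODNN7EXAMPLE:EXAMPLESECRETKEY@glacier.us-east-1.amazonaws.com/
}}}\\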
\\
[attachments|glacier_vfs.png]\\
\\
Select the proper region from the Server combobox. The default region is [us-east-1].\\
Enter the vault name in the Vault Name field, or leave it empty to list all of the Vaults you have in the given region. Upload is only allowed under a Vault folder. CrushFTP keeps a special "glacier" folder on the server with a simulated folder structure; its "file" items are XML pointers to the real Glacier archive data. Each archive has the following archive description:\\
{{{
<m><v>4</v><p>[Base64 encoded path]</p><lm>[the current date]</lm></m>
}}}
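For illustration only, a description in this format could be assembled as in the Java sketch below. This is not CrushFTP's internal code; the sample path and the date format used for the lm (last modified) element are assumptions.\\
{{{
import java.nio.charset.StandardCharsets;
import java.text.SimpleDateFormat;
import java.util.Base64;
import java.util.Date;

public class ArchiveDescriptionExample {
    public static void main(String[] args) {
        // Hypothetical virtual path of an uploaded file inside a vault folder
        String path = "/myVault/backups/db_dump.zip";
        String encodedPath = Base64.getEncoder()
                .encodeToString(path.getBytes(StandardCharsets.UTF_8));

        // The date format here is an assumption for illustration only
        String lastModified = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date());

        // Produces the <m><v>4</v><p>...</p><lm>...</lm></m> layout shown above
        String description = "<m><v>4</v><p>" + encodedPath + "</p><lm>" + lastModified + "</lm></m>";
        System.out.println(description);
    }
}
}}}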
You can turn off the XML reference store by checking the "Delete local representation after upload" flag; the XML pointer is then deleted one second after the upload completes.\\
\\
!!! Glacier task
\\
If you already have archives in Glacier that were ⚠️ __not uploaded through CrushFTP__, you can use this task to rebuild the simulated folder and file structure (XML Pointers) that CrushFTP uses.\\
This process happens in two steps: first, it creates an Amazon Glacier Inventory retrieval job. Once this job is completed (typically in 3–5 hours), it downloads the inventory and uses it to generate CrushFTP’s simulated folder structure and file references.\\
For more details, see: [Amazon Vault Inventory Link|https://docs.aws.amazon.com/amazonglacier/latest/dev/vault-inventory.html]\\
\\
[attachments|glacier_task.png]\\
\\
⚠️ The CrushTask must be run __at least twice__:\\
 \\
__The first run__: It creates the Amazon inventory job, and the job ID returned by Amazon will be stored in the __glacier_info.XML__ file located in the Cache folder.\\
By default, this Cache folder points to the CrushFTP job folder, which can be found or customized in the task settings. This file is used later to track and complete the inventory retrieval process.\\
{{{
<?xml version="1.0" encoding="UTF-8"?>
<GlacierTask type="properties">
        <job_id>Amazon job id</job_id>
</GlacierTask>
}}}\\
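For reference, the Glacier API call behind this first run corresponds roughly to the sketch below (AWS SDK for Java v1). The region, credentials, and vault name are placeholders, and this is not CrushFTP's actual implementation.\\
{{{
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.glacier.AmazonGlacier;
import com.amazonaws.services.glacier.AmazonGlacierClientBuilder;
import com.amazonaws.services.glacier.model.InitiateJobRequest;
import com.amazonaws.services.glacier.model.InitiateJobResult;
import com.amazonaws.services.glacier.model.JobParameters;

public class StartInventoryJob {
    public static void main(String[] args) {
        // Placeholder credentials and region
        AmazonGlacier glacier = AmazonGlacierClientBuilder.standard()
                .withRegion("us-east-1")
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("ACCESS_KEY_ID", "SECRET_KEY_ID")))
                .build();

        // Ask Glacier to prepare an inventory of the vault (placeholder vault name)
        InitiateJobRequest request = new InitiateJobRequest()
                .withVaultName("myVault")
                .withJobParameters(new JobParameters()
                        .withType("inventory-retrieval")
                        .withFormat("JSON"));

        InitiateJobResult result = glacier.initiateJob(request);
        // A job ID like this is what ends up in glacier_info.XML
        System.out.println("Amazon job id: " + result.getJobId());
    }
}
}}}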
\\
__The second run__: It checks the status of the Amazon job and downloads the inventory once the job is finished. If the __glacier_info.XML__ file exists, the task uses the stored __Amazon job ID__ to check the current status of the job.\\
You can set up an __Email task__ after the Glacier task to send a notification with the job result, using the Amazon job status variable.\\
Possible values for the job status are: __In progress__, __Failed__, or __Succeeded__.\\
\\
{{{
{glacier_job_satus}
}}}\\
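Under the hood, this status check corresponds roughly to a Glacier DescribeJob call, as in the sketch below (AWS SDK for Java v1). The region, vault name, and job ID are placeholders, not CrushFTP's actual code.\\
{{{
import com.amazonaws.services.glacier.AmazonGlacier;
import com.amazonaws.services.glacier.AmazonGlacierClientBuilder;
import com.amazonaws.services.glacier.model.DescribeJobRequest;
import com.amazonaws.services.glacier.model.DescribeJobResult;

public class CheckInventoryJob {
    public static void main(String[] args) {
        // Placeholder region; credentials come from the default provider chain
        AmazonGlacier glacier = AmazonGlacierClientBuilder.standard()
                .withRegion("us-east-1")
                .build();

        // Job ID as stored in glacier_info.XML (placeholder value)
        String jobId = "stored-amazon-job-id";

        DescribeJobResult status = glacier.describeJob(new DescribeJobRequest()
                .withVaultName("myVault")
                .withJobId(jobId));

        // Amazon reports the job as InProgress, Succeeded, or Failed;
        // CrushFTP surfaces this as its job status variable for follow-up tasks
        System.out.println("Status: " + status.getStatusCode()
                + ", completed: " + status.getCompleted());
    }
}
}}}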
\\
Once the Amazon job status is Succeeded, the task downloads the Glacier Vault Inventory and creates CrushFTP's simulated glacier folder and file structure (XML pointers to your archive data) based on that inventory.\\
The archive description should have the following format:\\
{{{
<m><v>...</v><p>[Base64 encoded path]</p> ....</m>
}}}
If your Glacier archive descriptions do not have the format shown above, the task still creates the XML pointers, using the archive description as the file name.\\
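As a rough illustration of what the inventory download and path decoding involve, here is a sketch using the AWS SDK for Java v1. The vault name, job ID, and sample archive description are made-up placeholders, and CrushFTP's own parsing may differ.\\
{{{
import com.amazonaws.services.glacier.AmazonGlacier;
import com.amazonaws.services.glacier.AmazonGlacierClientBuilder;
import com.amazonaws.services.glacier.model.GetJobOutputRequest;
import com.amazonaws.services.glacier.model.GetJobOutputResult;

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DownloadInventory {
    public static void main(String[] args) throws Exception {
        // Placeholder region; credentials come from the default provider chain
        AmazonGlacier glacier = AmazonGlacierClientBuilder.standard()
                .withRegion("us-east-1")
                .build();

        // Download the finished inventory job's output (a JSON archive list)
        GetJobOutputResult output = glacier.getJobOutput(new GetJobOutputRequest()
                .withVaultName("myVault")                // placeholder vault name
                .withJobId("stored-amazon-job-id"));     // job ID from glacier_info.XML
        String inventoryJson = new String(output.getBody().readAllBytes(), StandardCharsets.UTF_8);
        System.out.println(inventoryJson);

        // For a description in CrushFTP's format, the virtual path is recovered
        // by Base64-decoding the contents of the <p> element, for example:
        String description = "<m><v>4</v><p>L215VmF1bHQvYmFja3Vwcy9kYl9kdW1wLnppcA==</p><lm>...</lm></m>";
        String encodedPath = description.substring(
                description.indexOf("<p>") + 3, description.indexOf("</p>"));
        String path = new String(Base64.getDecoder().decode(encodedPath), StandardCharsets.UTF_8);
        System.out.println(path);  // prints /myVault/backups/db_dump.zip
    }
}
}}}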