Sensitive Files Compression Inside A Container
Identifies the use of a compression utility to collect known files containing sensitive information, such as credentials and system configurations, inside a container.
Rule type: eql
Rule indices:
- logs-endpoint.events.process*
Severity: medium
Risk score: 47
Runs every: 5m
Searches indices from: now-9m (Date Math format, see also Additional look-back time)
Maximum alerts per execution: 100
References: None
Tags:
- Domain: Container
- OS: Linux
- Use Case: Threat Detection
- Tactic: Credential Access
- Tactic: Collection
- Data Source: Elastic Defend
- Resources: Investigation Guide
Version: 2
Rule authors:
- Elastic
Rule license: Elastic License v2
Investigation guide
Triage and analysis
Disclaimer: This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
Investigating Sensitive Files Compression Inside A Container
Containers are lightweight, portable environments used to run applications consistently across different systems. Adversaries may exploit compression utilities within containers to gather and exfiltrate sensitive files, such as credentials and configuration files. The detection rule identifies suspicious compression activities by monitoring for specific utilities and file paths, flagging potential unauthorized data collection attempts.
Possible investigation steps
- Review the process details to confirm the use of compression utilities such as zip, tar, gzip, hdiutil, or 7z within the container environment, focusing on the process.name and process.args fields.
- Examine the specific file paths listed in the process.args to determine if they include sensitive files like SSH keys, AWS credentials, or Docker configurations, which could indicate unauthorized data collection.
- Check that the event.type field is "start" and use the event timestamp to establish when the process was initiated, correlating it with any known legitimate activities or scheduled tasks within the container.
- Investigate the user or service account under which the process was executed to assess whether it has the necessary permissions and if the activity aligns with expected behavior for that account.
- Look for any related alerts or logs that might indicate a broader pattern of suspicious activity within the same container or across other containers in the environment (an example hunting query follows this list).
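The following EQL sketch illustrates one way to hunt for other process executions in the same container session. It is not part of the rule, and the entry leader entity ID is a placeholder to replace with the process.entry_leader.entity_id value from the alert; scope the time range in the query UI.
// Hunting sketch (not part of the rule): list other processes executed in the
// same container session as the alert. Replace the placeholder entity ID with
// the process.entry_leader.entity_id value from the alert document.
process where host.os.type == "linux" and event.type == "start" and event.action == "exec" and
  process.entry_leader.entry_meta.type == "container" and
  process.entry_leader.entity_id == "<entry-leader-entity-id-from-alert>"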
False positive analysis
- Routine backup operations may trigger the rule if they involve compressing sensitive files for storage. To handle this, identify and exclude backup processes or scripts that are known and trusted.
- Automated configuration management tools might compress configuration files as part of their normal operation. Exclude these tools by specifying their process names or paths in the exception list, as shown in the sketch after this list.
- Developers or system administrators might compress sensitive files during legitimate troubleshooting or maintenance activities. Establish a process to log and review these activities, and exclude them if they are verified as non-threatening.
- Continuous integration and deployment pipelines could involve compressing configuration files for deployment purposes. Identify these pipelines and exclude their associated processes to prevent false positives.
- Security tools that perform regular audits or scans might compress files for analysis. Ensure these tools are recognized and excluded from triggering the rule.
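Where trusted tooling is known, one option is to tune the query or add an equivalent rule exception. The sketch below is an illustration only: "backup-agent" and "/opt/acme-backup/*" are hypothetical names that must be replaced with the processes and paths actually observed in your environment, and the command-line path list is abbreviated to two entries for brevity.
// Tuning sketch (illustrative only): same logic as the rule, with exclusions
// for hypothetical trusted backup tooling. "backup-agent" and
// "/opt/acme-backup/*" are placeholders; the full path list from the rule
// query applies in practice.
process where host.os.type == "linux" and event.type == "start" and event.action == "exec" and
  process.entry_leader.entry_meta.type == "container" and
  process.name in ("zip", "tar", "gzip", "hdiutil", "7z") and
  process.command_line like~ ("*/root/.ssh/*", "*/etc/shadow*") and
  not process.parent.name == "backup-agent" and
  not process.parent.executable like "/opt/acme-backup/*"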
Response and remediation
- Immediately isolate the affected container to prevent further data exfiltration or unauthorized access. This can be done by stopping the container or disconnecting it from the network.
- Conduct a thorough review of the compressed files and their contents to assess the extent of sensitive data exposure. Focus on the specific file paths identified in the alert.
- Change credentials and keys that may have been compromised, including SSH keys, AWS credentials, and Docker configurations. Ensure that new credentials are distributed securely.
- Review and update access controls and permissions for sensitive files within containers to minimize exposure. Ensure that only necessary processes and users have access to these files.
- Implement monitoring and alerting for similar compression activities in other containers to detect potential threats early. Use the identified process names and arguments as indicators; an example correlation query follows this list.
- Escalate the incident to the security operations team for further investigation and to determine if additional systems or data have been affected.
- Conduct a post-incident review to identify gaps in security controls and update container security policies to prevent recurrence.
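As a follow-up to the monitoring step above, an EQL sequence can pair archive creation inside a container with a subsequent outbound connection from the same host. This is a sketch under assumptions, not a shipped rule: the host.id join is coarser than a per-session join, and the 15-minute window and private-range exclusions are starting points to tune for your environment.
// Correlation sketch (illustrative only): archive utility launched inside a
// container followed within 15 minutes by an outbound connection to a
// non-private address from the same host. Tune maxspan and the CIDR list.
sequence by host.id with maxspan=15m
  [ process where host.os.type == "linux" and event.type == "start" and event.action == "exec" and
      process.entry_leader.entry_meta.type == "container" and
      process.name in ("zip", "tar", "gzip", "7z") ]
  [ network where host.os.type == "linux" and event.type == "start" and
      not cidrmatch(destination.ip, "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "127.0.0.0/8", "::1") ]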
Setup
This rule requires data coming in from Elastic Defend.
Elastic Defend Integration Setup
Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app.
Prerequisite Requirements:
- Fleet is required for Elastic Defend.
- To configure Fleet Server, refer to the documentation.
The following steps should be executed in order to add the Elastic Defend integration on a Linux System:
- Go to the Kibana home page and click "Add integrations".
- In the query bar, search for "Elastic Defend" and select the integration to see more details about it.
- Click "Add Elastic Defend".
- Configure the integration name and optionally add a description.
- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads".
- Select a configuration preset. Each preset comes with different default settings for Elastic Agent; you can further customize these later by configuring the Elastic Defend integration policy. Helper guide.
- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting that provides "All events; all preventions".
- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead.
For more details on Elastic Agent configuration settings, refer to the helper guide.
- Click "Save and Continue".
- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts.
For more details on Elastic Defend, refer to the helper guide.
Rule query
process where host.os.type == "linux" and event.type == "start" and event.action == "exec" and
  process.entry_leader.entry_meta.type == "container" and
  process.name in ("zip", "tar", "gzip", "hdiutil", "7z") and
  process.command_line like~ (
    "*/root/.ssh/*", "*/home/*/.ssh/*", "*/root/.bash_history*", "*/etc/hosts*",
    "*/root/.aws/*", "*/home/*/.aws/*", "*/root/.docker/*", "*/home/*/.docker/*",
    "*/etc/group*", "*/etc/passwd*", "*/etc/shadow*", "*/etc/gshadow*"
  )
Framework: MITRE ATT&CK™
-
Tactic:
- Name: Credential Access
- ID: TA0006
- Reference URL: https://attack.mitre.org/tactics/TA0006/
-
Technique:
- Name: Unsecured Credentials
- ID: T1552
- Reference URL: https://attack.mitre.org/techniques/T1552/
-
Sub-technique:
- Name: Credentials In Files
- ID: T1552.001
- Reference URL: https://attack.mitre.org/techniques/T1552/001/
-
Tactic:
- Name: Collection
- ID: TA0009
- Reference URL: https://attack.mitre.org/tactics/TA0009/
-
Technique:
- Name: Archive Collected Data
- ID: T1560
- Reference URL: https://attack.mitre.org/techniques/T1560/
-
Sub-technique:
- Name: Archive via Utility
- ID: T1560.001
- Reference URL: https://attack.mitre.org/techniques/T1560/001/