Malicious Python Package Targets macOS Developers to Access their Google Cloud Platform Accounts

July 2024 by Checkmarx

In a recent investigation, we uncovered that the "lr-utils-lib" Python package contained hidden malicious code. Upon installation, this code activates, targeting macOS systems and attempting to steal Google Cloud Platform credentials by sending them to a remote server. Additionally, we discovered a link to a fake LinkedIn profile for "Lucid Zenith," who falsely claimed to be the CEO of Apex Companies, LLC, indicating possible social engineering tactics. Alarmingly, AI search engines inconsistently verified this false information, highlighting significant cybersecurity challenges in the digital age.

Key Points

A package called "lr-utils-lib" was uploaded to PyPI in early June 2024, containing malicious code that executes automatically upon installation.

The malware uses a list of predefined hashes to target specific macOS machines and attempts to harvest Google Cloud authentication data.

The harvested credentials are sent to a remote server.

Attack Flow

[Figure: attack flow diagram]

The malicious code is set up within the setup.py file of the Python package, allowing it to execute automatically upon installation.

[Figure: simplified version of the malicious code]

This is the simplified code version, as the original was obfuscated.

Upon activation, the malware first verifies if it’s operating on a macOS system, its primary target. It then proceeds to retrieve the IOPlatformUUID of the Mac device, a unique identifier, and hashes it using the SHA-256 algorithm.

This resulting hash is then compared against a predefined list of 64 Mac UUID hashes, indicating a highly targeted attack strategy and suggesting the attackers have prior knowledge of their intended victims’ systems.
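The targeting logic described above amounts to hashing the machine UUID and testing set membership. A hedged sketch of the idea (the hash list here is a stand-in, not the attackers' actual list; on macOS the IOPlatformUUID itself can be read with `ioreg -rd1 -c IOPlatformExpertDevice`):

```python
import hashlib

def uuid_digest(platform_uuid: str) -> str:
    """SHA-256 hex digest of an IOPlatformUUID string."""
    return hashlib.sha256(platform_uuid.encode("utf-8")).hexdigest()

# Stand-in for the malware's hard-coded list of 64 digests.
TARGET_HASHES = {uuid_digest("AAAAAAAA-1111-2222-3333-444444444444")}

def is_targeted(platform_uuid: str) -> bool:
    # Storing only hashes lets the attacker identify victims without
    # embedding the plain UUIDs in the package.
    return uuid_digest(platform_uuid) in TARGET_HASHES
```

Note that because only digests ship in the package, analysts cannot directly recover the victims' UUIDs from the code.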

If a match is found in the hash list, the malware’s data exfiltration process begins. It attempts to access two critical files within the /.config/gcloud directory: application_default_credentials.json and credentials.db. These files typically contain sensitive Google Cloud authentication data. The malware then attempts to transmit the contents of these files via HTTPS POST requests to a remote server identified as europe-west2-workload-422915[.]cloudfunctions[.]net.

This data exfiltration, if successful, could provide the attackers with unauthorized access to the victim’s Google Cloud resources.

CEO Impersonation

The social engineering aspect of this attack, while not definitively linked to the malware itself, presents an interesting dimension. A LinkedIn profile was discovered under the name "Lucid Zenith", matching the name of the package owner. This profile falsely claims that Lucid Zenith is the CEO of Apex Companies, LLC. The existence of this profile raises questions about potential social engineering tactics that could be employed alongside the malware.

When querying various AI-powered search engines and chatbots about Lucid Zenith’s position, inconsistent responses were observed. One AI-powered search engine, "Perplexity", incorrectly confirmed the false information without mentioning the real CEO.

The response was largely consistent across various phrasings of the question.

The response was striking, since the AI-powered search engine could easily have verified the claim by checking the official company page or noticing that two LinkedIn profiles claimed the same title.

Other AI platforms, to their credit, when repeatedly questioned about Lucid Zenith’s role, correctly stated that he was not the CEO and provided the name of the actual CEO. This discrepancy underscores the variability in AI-generated responses and the potential risks of over-relying on a single AI source for verification. It serves as a reminder that AI systems can sometimes propagate incorrect information, highlighting the importance of cross-referencing multiple sources and maintaining a critical approach when using AI-powered tools for information gathering. Whether or not the attacker deliberately engineered this manipulation, it highlights a vulnerability in the current state of AI-powered information retrieval and verification systems that nefarious actors could exploit.

Conclusion

The analysis of the malicious “lr-utils-lib” Python package reveals a deliberate attempt to harvest and exfiltrate Google Cloud credentials from macOS users. This behavior underscores the critical need for rigorous security practices when using third-party packages. Users should ensure they install packages from trusted sources and verify the contents of setup scripts before installation. The associated fake LinkedIn profile, and the inconsistent handling of this false information by AI-powered search engines, highlight broader cybersecurity concerns. This incident serves as a reminder of the limitations of AI-powered tools for information verification, drawing parallels to issues like package hallucinations. It reinforces the need for strict vetting processes, multi-source verification, and a culture of critical thinking.
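One way to act on the advice to verify setup scripts is a quick pattern scan of setup.py before installing. The patterns below are an illustrative heuristic of our own (not a Checkmarx detection rule) and will produce false positives; they only flag constructs worth a closer manual look:

```python
import re

# Constructs that legitimately appear in some setup.py files,
# but that warrant review before installation.
SUSPICIOUS_PATTERNS = {
    "install hook": r"cmdclass\s*=",
    "dynamic exec": r"\b(eval|exec)\s*\(",
    "encoded payload": r"base64|\\x[0-9a-f]{2}",
    "network call": r"urlopen|requests\.(get|post)|http\.client",
}

def flag_setup_py(source: str) -> list[str]:
    """Return labels for suspicious constructs found in setup.py source."""
    return [label for label, pattern in SUSPICIOUS_PATTERNS.items()
            if re.search(pattern, source)]
```

Flagged constructs do not prove malice (many legitimate packages define cmdclass hooks), but a setup.py that combines an install hook with encoding and network calls, as in this incident, deserves scrutiny.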

As part of the Checkmarx Supply Chain Security solution, our research team continuously monitors suspicious activities in the open-source software ecosystem. We track and flag “signals” that may indicate foul play and promptly alert our customers to help protect them.

Checkmarx One customers are protected from this attack.

PACKAGES

lr-utils-lib

IOC

europe-west2-workload-422915[.]cloudfunctions[.]net
lucid[.]zeniths[.]0j@icloud[.]com

