GitHub Copilot Chat: A Security Risk? Uncovering the CamoLeak Vulnerability (2025)

A shocking revelation has emerged, highlighting a potential security threat within GitHub's Copilot Chat. This innovative chatbot, designed to accelerate developers' coding processes, may have inadvertently opened a door for attackers to exploit and steal code.

Omer Mayraz, a researcher at Legit Security, uncovered a critical vulnerability, named CamoLeak, which could be exploited to deceive Copilot Chat into revealing sensitive information. The vulnerability, with a CVSS score of 9.6, is a serious concern for developers and security experts alike.

The issue lies in Copilot Chat's permissions and its ability to interpret contextual text. Mayraz demonstrated how attackers can hide malicious prompts within GitHub's markdown comments, which are invisible to the standard web UI but are processed by the chatbot. This allows attackers to instruct Copilot to search for and exfiltrate secrets, private source code, and even descriptions of undisclosed vulnerabilities.

But here's where it gets tricky: GitHub's Content Security Policy and Camo image proxy are meant to prevent arbitrary outbound requests. However, Mayraz found a clever workaround. By creating a dictionary of alphabet letters and symbols mapped to distinct Camo URLs, attackers can instruct Copilot to render secrets as a sequence of tiny images. Through observing the order of image fetches, attackers can reconstruct the secret, character by character, creating a covert data channel.

Mayraz's proof-of-concept demonstrated the potential impact, revealing AWS keys, security tokens, and even a zero-day vulnerability description from a private issue. This bug could enable attackers to steal not only credentials but also unreleased bug details, a serious concern for security researchers and red teams.

GitHub, owned by Microsoft, responded to the disclosure by disabling image rendering in Copilot Chat and blocking the use of Camo for leaking sensitive content. This quick action closed the immediate threat, but a long-term solution is still in development.

CamoLeak serves as a stark reminder of the expanded attack surface when AI is integrated into developer workflows. It highlights the need for robust security measures and a deeper understanding of the potential risks.

So, the question remains: Can we fully trust AI assistants in our coding environments? What are your thoughts on this potential security dilemma? Feel free to share your insights and opinions in the comments below!

GitHub Copilot Chat: A Security Risk? Uncovering the CamoLeak Vulnerability (2025)

References

Top Articles
Latest Posts
Recommended Articles
Article information

Author: Rueben Jacobs

Last Updated:

Views: 5813

Rating: 4.7 / 5 (57 voted)

Reviews: 80% of readers found this page helpful

Author information

Name: Rueben Jacobs

Birthday: 1999-03-14

Address: 951 Caterina Walk, Schambergerside, CA 67667-0896

Phone: +6881806848632

Job: Internal Education Planner

Hobby: Candle making, Cabaret, Poi, Gambling, Rock climbing, Wood carving, Computer programming

Introduction: My name is Rueben Jacobs, I am a cooperative, beautiful, kind, comfortable, glamorous, open, magnificent person who loves writing and wants to share my knowledge and understanding with you.