Global Transparency Initiative update, April 2024

Expanding the Global Transparency Initiative by opening the Istanbul Transparency Center and launching the Transparency Lab in collaboration with Boğaziçi University

An evidence-based approach to assessing the security of IT products is a powerful tool for accurately evaluating their reliability. That’s why we’ve been rolling out our Global Transparency Initiative since its launch in 2018. On April 30, we opened our 12th Transparency Center – this time in Istanbul, Turkey – where our partners and clients, as well as cybersecurity regulators, can learn more about our solutions and review our product source code, software updates, and threat detection rules. Additionally, visitors can check the results of independent audits of our products and access the list of software components – the Software Bill of Materials (SBOM).

In addition, at the opening of the new Transparency Center we signed a memorandum of understanding between Kaspersky and Boğaziçi University, a prominent public university in Istanbul. It was signed by Kaspersky CEO Eugene Kaspersky and Boğaziçi University Rector Prof. Dr. Mehmet Naci İnci, and its main purpose is to establish a framework for mutual technical cooperation in future educational programs.

GTI milestones of the past year

It’s been over a year since our last Global Transparency Initiative update on the Kaspersky Daily blog, so in this post we’re highlighting the GTI’s milestones from the past year.

Two new Transparency Centers – one in Africa and one in the Middle East

In 2023, we opened two new Transparency Centers. The first was opened in Riyadh, the capital of Saudi Arabia, and the second in Kigali, the capital of Rwanda. Each is the first Transparency Center in its region – the Middle East and Africa, respectively.

Proposing ethical principles for artificial intelligence development and use in cybersecurity

To apply AI in cybersecurity without negative consequences, we proposed that the industry adopt a set of AI ethical principles. Briefly, here they are:

Transparency (consumers have the right to know whether a security provider uses AI systems and, if so, how these systems make decisions and for what purposes);
Security (AI developers need to prioritize resilience and security);
Human control (the results and performance of machine-learning systems should be continuously monitored by experts);
Privacy (developers need to take steps to protect individuals’ privacy rights);
Designed for cybersecurity (AI in information security should be used purely for defensive purposes).

Passing the SOC 2 Type 2 audit

In June 2023, we passed a Service Organization Controls (SOC 2) Type 2 audit, which analyzed the company’s internal operating controls over a six-month period. The audit was conducted by a team of accountants from an independent service auditor. It concluded that Kaspersky’s internal controls effectively ensure regular automatic antivirus database updates, and that the process of developing and releasing antivirus databases is protected from tampering.

Releasing regular transparency reports

Every six months, we issue a report on the requests we receive from governments and law enforcement agencies. The latest report covers requests from the second half of 2023. During this period, we received 63 requests from governments and agencies based in five countries. More than a third of them were rejected because they lacked the required data or failed to meet legal verification requirements. We also shared a brief report on requests from our customers to delete personal data, to provide stored data, and to disclose what information is stored and where.

To learn more about our Global Transparency Initiative, or to request a tour of a Transparency Center, please visit the project’s new interactive website, which shows how the GTI has evolved and the progress it has made since its inception.