The key security and safety challenges in adopting GenAI

02/07/2025

The number of providers is increasing, while the gap between open-source and proprietary models is narrowing and inference costs are falling. In the early stages of GenAI's evolution, reasoning capability was the most important criterion for choosing one model over another; today, additional factors such as price, speed, and security play a crucial role.

In this context, and given that the number of GenAI-based applications in production systems is expected to grow in 2025, building a robust evaluation framework is essential for guiding architectural choices correctly.
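As a rough illustration, the sketch below scores candidate models against weighted selection criteria such as those mentioned above. All names, weights, and scores are hypothetical placeholders, not CRIF's actual framework; a production framework would derive these values from benchmarks, load tests, and security assessments.

```python
from dataclasses import dataclass

# Hypothetical criteria weights; a real framework would calibrate these
# against benchmark results for accuracy, latency, cost, and security.
WEIGHTS = {"accuracy": 0.4, "cost": 0.2, "speed": 0.2, "security": 0.2}

@dataclass
class ModelProfile:
    name: str
    scores: dict  # criterion -> normalized score in [0, 1]

def weighted_score(model: ModelProfile) -> float:
    """Combine per-criterion scores into a single comparable number."""
    return sum(WEIGHTS[c] * model.scores.get(c, 0.0) for c in WEIGHTS)

# Illustrative candidates with made-up scores.
candidates = [
    ModelProfile("proprietary-model", {"accuracy": 0.9, "cost": 0.3, "speed": 0.6, "security": 0.8}),
    ModelProfile("open-source-model", {"accuracy": 0.8, "cost": 0.9, "speed": 0.7, "security": 0.6}),
]

# Rank candidates by their aggregate score, highest first.
for m in sorted(candidates, key=weighted_score, reverse=True):
    print(f"{m.name}: {weighted_score(m):.2f}")
```

The weighted-sum approach is only one option; the point is that making criteria and their relative importance explicit turns model selection into a repeatable, auditable decision rather than an ad hoc one.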

Addressing these challenges requires a concerted effort across research, regulation, and technological innovation to ensure that the benefits of GenAI can be fully realized without compromising the security and integrity of systems. Unfortunately, for the AI architectures that power most GenAI-based applications, it is impossible to prevent all attacks.

For example, carefully crafted inputs (commonly known as prompt injections) can manipulate a model into producing undesired or harmful outputs, such as unsafe content. Mitigation strategies such as input sanitization, adversarial tuning, and moderation models can strongly reduce these risks, but they do not eliminate them. A layered approach is sketched below.
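As a minimal sketch of such layered mitigations, the following combines pattern-based input sanitization with a moderation check on the model's output. The patterns, the moderation_model callable, and the risk threshold are illustrative assumptions rather than a complete defense; as noted above, filters of this kind reduce risk but cannot eliminate it.

```python
import re

# Hypothetical deny-list patterns; real deployments combine many layers
# (pattern checks, a dedicated moderation model, output filtering).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

def sanitize_input(user_text: str) -> str:
    """First layer: reject inputs matching known injection patterns."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_text):
            raise ValueError("Potential prompt injection detected")
    return user_text

def moderate_output(model_output: str, moderation_model) -> str:
    """Second layer: score the answer with a moderation model
    (here a stand-in callable returning a risk score in [0, 1])."""
    risk = moderation_model(model_output)
    if risk > 0.5:  # hypothetical threshold
        return "The response was withheld by the safety filter."
    return model_output
```

Pattern matching alone is easy to bypass, which is why the output-side moderation model matters: it judges what the model actually produced, independently of how the input was phrased.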

The environment is further complicated by the lack of clear standards and the prevalence of trade secrets, which make independent, transparent evaluations difficult. Additionally, the opacity of both AI algorithms and the organizations developing them complicates the assignment of legal responsibility, further obstructing governance and regulatory enforcement.

For these reasons, CRIF has always been committed to providing secure and robust products to its clients. Specifically, the CRIF Engineering and Data Science teams focus on three key development principles: Security, Safety, and Accuracy. CRIF's aim is to deliver accurate applications while mitigating operational risks.