Large Language Model Security

sAIfer Lab develops novel, rigorous testing and validation measures for assessing the robustness of LLMs before their deployment in sensitive applications.

Generative models, particularly Large Language Models (LLMs), have recently gained considerable attention and popularity thanks to notable technical advances and extensive media coverage, largely driven by the success of commercial products. These models are trained on large-scale text corpora, which has given them remarkable abilities to process and generate diverse media content, often with human-like fluency.

However, a significant concern in deploying LLMs in real-world systems is their susceptibility to small perturbations of the input data. Research has shown that even minor changes, such as plausible typos or synonym substitutions, can cause a model to leak data, infringe copyright, or generate factually incorrect, biased, or harmful content.
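
To make the threat concrete, the sketch below outlines one way such a perturbation-based robustness check might look: it generates typo-like variants of a prompt and flags cases where the model's answer diverges from its answer on the clean prompt. The query_model stub, the typo_perturbations helper, and the exact-match comparison are illustrative assumptions, not the labs' actual testing methodology.

```python
import random


def query_model(prompt: str) -> str:
    # Hypothetical stand-in for an LLM endpoint; replace with a real client call.
    raise NotImplementedError("Plug in an actual LLM client here.")


def typo_perturbations(prompt: str, n_variants: int = 5, seed: int = 0) -> list[str]:
    # Generate variants of the prompt containing plausible typos
    # (adjacent-character swaps, a common typing mistake).
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        chars = list(prompt)
        if len(chars) < 2:
            break
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]  # swap two adjacent characters
        variants.append("".join(chars))
    return variants


def robustness_check(prompt: str) -> list[tuple[str, str]]:
    # Compare the model's answer on the clean prompt with its answers on the
    # perturbed prompts; any divergence is flagged for manual review.
    reference = query_model(prompt)
    mismatches = []
    for variant in typo_perturbations(prompt):
        answer = query_model(variant)
        if answer != reference:
            mismatches.append((variant, answer))
    return mismatches
```

In practice, the exact-match comparison would typically be replaced by a semantic or safety-oriented scoring of the responses, since two differently worded answers can be equally acceptable.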

Such attacks can have severe business consequences, including reputational damage, performance degradation, user harm, toxic content, targeted manipulation, and legal and ethical ramifications. Therefore, given the widespread adoption of LLMs, PraLab and SmartLab are developing novel, rigorous testing and validation measures for assessing their robustness before deployment in sensitive applications.

 
