N. Madrueño Sierro, A. Fernández-Isabel, R. R. Fernández, I. Martín de Diego

Text adversarial example generation is a powerful technique for identifying and analyzing the vulnerabilities of natural language processing models. These adversarial attacks introduce subtle text perturbations that cause victim models to make incorrect predictions while preserving the original semantic meaning from a human perspective. In this context, a novel method for generating adversarial text examples through the use of large language models (LLMs) is presented. The approach exploits the strong text generation capabilities of LLMs to modify the original text at different textual levels.
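The attack loop the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' method: the LLM paraphraser is mocked with a tiny synonym table (`SYNONYMS`, a hypothetical stand-in), and the victim model is a toy keyword classifier; in the actual approach an LLM would propose the word-level perturbations and a trained model would be attacked.

```python
def victim_classifier(text: str) -> str:
    """Toy victim model: predicts 'positive' if a cue word appears."""
    positive_cues = {"great", "excellent", "wonderful"}
    return "positive" if any(w in text.lower().split() for w in positive_cues) else "negative"

# Hypothetical stand-in for LLM-generated word-level perturbations.
SYNONYMS = {"great": ["top-notch", "superb"], "movie": ["film", "picture"]}

def generate_candidates(text: str):
    """Yield perturbed texts, one substituted word at a time."""
    words = text.split()
    for i, w in enumerate(words):
        for s in SYNONYMS.get(w.lower(), []):
            yield " ".join(words[:i] + [s] + words[i + 1:])

def attack(text: str):
    """Return a perturbation that flips the victim's prediction, or None."""
    original = victim_classifier(text)
    for candidate in generate_candidates(text):
        if victim_classifier(candidate) != original:
            return candidate  # adversarial example found
    return None

adv = attack("a great movie")  # a meaning-preserving rewrite that flips the label
```

A real implementation would additionally score candidates for semantic similarity to the original (e.g., with sentence embeddings) before accepting them, so that only meaning-preserving perturbations count as adversarial examples.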

Keywords: Adversarial Attack, Text Adversarial Example, Large Language Model, Natural Language Processing, Text Classification

Scheduled

Software II
June 10, 2025  3:30 PM
Sala VIP Jaume Morera i Galícia


Other papers in the same session

Assessing the Impact of Percentage of Missing Data and Imputation Methods on Youden Index Estimation

S. Sabroso-Lasa, L. M. Esteban Escaño, N. Malats, J. T. Alcalá Nalvaiz

CUtools: an R package for clinical utility analysis of predictive models

M. Escorihuela Sahún, L. M. Esteban Escaño, Á. Borque-Fernando, G. Sanz Sáiz

