Application of Generative Artificial Intelligence in Requirements Specification
Keywords: Requirements Engineering; ISO 29148; LLM; RAG; Prompt Engineering; Design Science Research; Textual Quality of Requirements.
Requirements specifications in natural language often exhibit ambiguity, inconsistency, and incompleteness, which degrade verifiability and clarity and increase development costs. This work investigates how Large Language Models (LLMs), guided by prompt engineering, can support the generation of natural-language requirements specifications that follow the textual guidelines of ISO 29148:2018, taking the results of requirements elicitation as input. We propose a requirements specification assistant structured into four components: (i) initial requirements
generation by an LLM; (ii) few-shot prompt examples written in accordance with ISO 29148:2018 (definitions, sentence patterns, quality attributes); (iii) Retrieval-Augmented Generation (RAG) to retrieve and cite excerpts from the
standard and from best-practice guides during drafting; and (iv) iterative refinement through objective questions that reduce ambiguity, subjective terms, and scope gaps, always under the supervision of the requirements engineer. The methodology follows Design Science Research (DSR), including demonstration and empirical evaluation: comparative experiments between the assistant and a baseline LLM, measuring textual quality (clarity, completeness, consistency, verifiability) and drafting effort, in addition to a survey of practitioners on utility and risks (e.g., information security). The contributions are: (1) the operationalization of the writing recommendations of ISO 29148 as automated checks; (2) a replicable protocol for using LLMs to support the writing of specifications; and (3) evidence on the benefits and limits of automated support. We conclude that the assistant does not replace the engineer; it acts as a drafting co-pilot that helps produce clearer requirement texts aligned with the standard.
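The four components above could be sketched roughly as follows. This is a minimal illustrative sketch, not the assistant's actual implementation: the function names, the list of subjective terms, and the placeholder standard excerpts are all assumptions, and the LLM call itself is left out (the sketch only assembles the prompt and the refinement questions that would surround it).

```python
# Illustrative sketch of the four-component assistant pipeline.
# All names, excerpts, and the weak-word list are assumptions for
# demonstration; the real assistant would call an LLM with the prompt.

SUBJECTIVE_TERMS = {"fast", "user-friendly", "easy", "adequate", "robust"}

# (ii) Few-shot examples aligned with ISO 29148:2018 sentence patterns.
FEW_SHOT_EXAMPLES = [
    "The system shall export the audit log in CSV format within 5 seconds.",
]

# (iii) Placeholder corpus standing in for excerpts from the standard
# and best-practice guides (a real system would index the documents).
STANDARD_EXCERPTS = {
    "verifiability": "Each requirement shall be verifiable by test, "
                     "analysis, inspection, or demonstration.",
    "ambiguity": "Avoid vague terms such as 'user-friendly' or 'fast'; "
                 "state measurable criteria.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword retrieval: return excerpts whose topic key
    appears in the query (stands in for a RAG retriever)."""
    return [text for key, text in STANDARD_EXCERPTS.items()
            if key in query.lower()]

def refinement_questions(requirement: str) -> list[str]:
    """(iv) Objective questions that surface subjective terms so the
    requirements engineer can replace them with measurable criteria."""
    return [f"What measurable criterion replaces '{term}'?"
            for term in sorted(SUBJECTIVE_TERMS)
            if term in requirement.lower()]

def build_prompt(elicitation_notes: str) -> str:
    """(i)-(iii) Assemble the drafting prompt: retrieved standard
    excerpts, few-shot examples, then the elicitation input."""
    context = "\n".join(retrieve("ambiguity and verifiability"))
    examples = "\n".join(FEW_SHOT_EXAMPLES)
    return (f"Standard excerpts:\n{context}\n\n"
            f"Examples:\n{examples}\n\n"
            f"Draft requirements from these elicitation notes:\n"
            f"{elicitation_notes}")
```

The refinement loop would show the engineer the questions returned by `refinement_questions` for each drafted requirement, keeping the human in supervisory control rather than accepting LLM output verbatim.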