Large language models (LLMs) have the potential to transform how software requirements are gathered, analysed and validated. This systematic review synthesises recent research on integrating LLMs into the requirements engineering process. We survey applications that leverage generative models to elicit requirements from stakeholders, classify and cluster user stories, and generate acceptance criteria and test cases.
The review identifies challenges such as handling ambiguity and inconsistency in natural language specifications, maintaining domain specificity and ensuring the privacy of sensitive project data. We discuss strategies for integrating LLMs into agile development workflows, emphasising human‑in‑the‑loop practices to ensure correctness and accountability. Finally, we outline open research directions, including interactive requirements refinement, multilingual support and metrics for evaluating requirements quality.