
Key Issues of Contemporary Linguistics


Inclusive next-generation dialogue system: linguistic aspect

https://doi.org/10.18384/2949-5075-2025-1-67-82

Abstract

Aim. To identify strategies for developing artificial intelligence language models that support the inclusion of people with mental disabilities.

Methodology. The study compares two approaches to building dialogue systems: information-retrieval question answering and generative question answering. A collection of texts on inclusive education was compiled, and a suite of question-answering systems was built with neural transfer learning methods to compare the performance of the two approaches. A linguistic analysis of the collected data and of the dialogue system's output was then conducted.
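
The following is a minimal illustrative sketch of the two approaches being compared, assuming the Hugging Face transformers library and two publicly available checkpoints (deepset/xlm-roberta-base-squad2 and google/flan-t5-base) chosen purely for demonstration; it is not the system built in the study.

```python
# Illustration only: extractive (information-retrieval style) vs. generative QA.
# The checkpoints are public models picked for this sketch, not the models
# fine-tuned in the study.
from transformers import pipeline

context = (
    "Inclusive education means that all students, including students with "
    "mental disabilities, learn together and receive the support they need."
)
question = "Who learns together in inclusive education?"

# Extractive QA: the answer is a span copied from the source text,
# which keeps it highly relevant to the retrieved passage.
extractive_qa = pipeline("question-answering",
                         model="deepset/xlm-roberta-base-squad2")
span = extractive_qa(question=question, context=context)
print("Extractive:", span["answer"], f"(score={span['score']:.2f})")

# Generative QA: a sequence-to-sequence model composes the answer freely,
# which is more flexible but not guaranteed to stay within the source text.
generative_qa = pipeline("text2text-generation", model="google/flan-t5-base")
prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
print("Generative:", generative_qa(prompt, max_new_tokens=32)[0]["generated_text"])
```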

Results. The study showed that both approaches to building dialogue systems have advantages and limitations. Information-retrieval question answering provides highly relevant answers, since each answer is a span taken directly from the source texts, whereas generative models offer greater flexibility and cover a broader range of contexts. Linguistic analysis indicated that the best results come from combining the two approaches, leveraging the strengths of each depending on the specific task and interaction context.
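
One way such a combination could look in practice, sketched here as an assumption rather than the paper's implementation, is a confidence-based fallback that reuses the two pipelines from the sketch above: return the extractive span when its score is high enough, otherwise defer to the generative model.

```python
# Hypothetical combination of the two approaches (not the authors' system):
# trust the extractive span when the model is confident, otherwise let the
# generative model produce a free-form answer.
def hybrid_answer(question: str, context: str,
                  extractive_qa, generative_qa, threshold: float = 0.5) -> str:
    span = extractive_qa(question=question, context=context)
    if span["score"] >= threshold:
        return span["answer"]  # relevant span found directly in the source text
    prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
    return generative_qa(prompt, max_new_tokens=64)[0]["generated_text"]
```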

Research implications. The study contributes to the theory of dialogue systems by deepening the understanding of how the structural and semantic aspects of language interact and how this interaction affects the effectiveness of different approaches to building dialogue systems. The results can also be applied in the educational system.

About the Author

V. I. Firsanova
Saint Petersburg State University
Russian Federation

Viktorya I. Firsanova – Postgraduate Student, Department of Mathematical Linguistics

Saint Petersburg





This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2949-5059 (Print)
ISSN 2949-5075 (Online)