<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3.dtd">
<article article-type="research-article" dtd-version="1.3" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xml:lang="ru"><front><journal-meta><journal-id journal-id-type="publisher-id">linmgou</journal-id><journal-title-group><journal-title xml:lang="ru">Вопросы современной лингвистики</journal-title><trans-title-group xml:lang="en"><trans-title>Key Issues of Contemporary Linguistics</trans-title></trans-title-group></journal-title-group><issn pub-type="ppub">2949-5059</issn><issn pub-type="epub">2949-5075</issn><publisher><publisher-name>Federal State University of Education</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="doi">10.18384/2949-5075-2025-4-6-15</article-id><article-id custom-type="elpub" pub-id-type="custom">linmgou-2008</article-id><article-categories><subj-group subj-group-type="heading"><subject>Research Article</subject></subj-group><subj-group subj-group-type="section-heading" xml:lang="ru"><subject>ТЕОРЕТИЧЕСКАЯ, ПРИКЛАДНАЯ И СРАВНИТЕЛЬНО-СОПОСТАВИТЕЛЬНАЯ ЛИНГВИСТИКА</subject></subj-group><subj-group subj-group-type="section-heading" xml:lang="en"><subject>THEORETICAL, APPLIED AND COMPARATIVE LINGUISTICS</subject></subj-group></article-categories><title-group><article-title>Моделирование лингвокреативных стратегий в генеративных языковых системах: трансформация интенциональных девиаций в дискурсивные паттерны</article-title><trans-title-group xml:lang="en"><trans-title>Modeling linguo-creative strategies in generative language systems: transformation of intentional deviations into discursive patterns</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author" corresp="yes"><contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-2773-5384</contrib-id><name-alternatives><name name-style="eastern" xml:lang="ru"><surname>Акай</surname><given-names>О. 
М.</given-names></name><name name-style="western" xml:lang="en"><surname>Akay</surname><given-names>O. M.</given-names></name></name-alternatives><bio xml:lang="ru"><p>Акай Оксана Михайловна (г. Санкт-Петербург) – доктор филологических наук, профессор кафедры иностранных языков в сфере экономики и права</p></bio><bio xml:lang="en"><p>Oksana M. Akay (St. Petersburg) – Dr. Sci. (Philology), Prof., Department of Foreign Languages in Economics and Law</p></bio><email xlink:type="simple">o.akay@spbu.ru</email><xref ref-type="aff" rid="aff-1"/></contrib></contrib-group><aff-alternatives id="aff-1"><aff xml:lang="ru">Санкт-Петербургский государственный университет<country>Россия</country></aff><aff xml:lang="en">Saint Petersburg State University<country>Russian Federation</country></aff></aff-alternatives><pub-date pub-type="collection"><year>2025</year></pub-date><pub-date pub-type="epub"><day>20</day><month>11</month><year>2025</year></pub-date><volume>0</volume><issue>4</issue><fpage>6</fpage><lpage>15</lpage><permissions><copyright-statement>Copyright &#x00A9; Акай О.М., 2025</copyright-statement><copyright-year>2025</copyright-year><copyright-holder xml:lang="ru">Акай О.М.</copyright-holder><copyright-holder xml:lang="en">Akay O.M.</copyright-holder><license license-type="creative-commons-attribution" xlink:href="https://creativecommons.org/licenses/by/4.0/" xlink:type="simple"><license-p>This work is licensed under a Creative Commons Attribution 4.0 License.</license-p></license></permissions><self-uri xlink:href="https://www.linguamgou.ru/jour/article/view/2008">https://www.linguamgou.ru/jour/article/view/2008</self-uri><abstract><p>Цель. Настоящее исследование направлено на выявление механизмов обработки интенциональных языковых девиаций большими языковыми моделями (LLM) и анализ их лингвокреативных стратегий в цифровом дискурсе. 
Цель работы заключается в разработке теоретической модели, объясняющей когнитивные алгоритмы распознавания и трансформации девиаций в системах искусственного интеллекта. Процедура и методы. В качестве методологической базы использован комплексный подход, включающий корпусный анализ диахронического среза цифрового дискурса (2019–2024 гг.), экспериментальные промпты с контролируемыми девиациями для моделей GPT-4, Gemini 1.5 и Claude 3, а также дискурс-анализ речевых актов ИИ с применением трёхуровневой шкалы аннотирования (репликация/амплификация/нормализация). Результаты исследования подтвердили гипотезу о статистической природе лингвокреативности LLM, выявив трёхступенчатую модель обработки девиаций: распознавание через механизмы внимания, классификация по степени отклонения от нормы, стратегический выбор ответной реакции. Установлен парадокс «креативного конформизма», проявляющийся в тенденции ИИ к гипернормализации изначально маргинальных языковых инноваций. Особый практический интерес представляют документированные эффекты циркуляции ИИ-генерированных неологизмов в социальных медиа и формирования «искусственного языкового вкуса». Теоретическая значимость работы заключается в развитии аппарата когнитивной лингвистики цифрового дискурса и уточнении онтологии интенциональных девиаций. Практическая ценность связана с приложениями в области разработки NLP-систем, цифровой лингводидактики и прогнозирования языковых изменений. Полученные данные открывают перспективы для дальнейшего изучения культурно-специфичных девиаций в многоязычных моделях и разработки метрик оценки лингвокреативного потенциала ИИ.</p></abstract><trans-abstract xml:lang="en"><p>Aim. The present research aims to identify the mechanisms of processing intentional linguistic deviations by large language models (LLMs) and to analyse their linguocreative strategies in digital discourse. 
The aim of the work is to develop a theoretical model explaining cognitive algorithms for recognising and transforming deviations in artificial intelligence systems. Methodology. An integrated approach including corpus analysis of a diachronic slice of digital discourse (2019–2024), experimental prompts with controlled deviations for the GPT-4, Gemini 1.5 and Claude 3 models, and discourse analysis of AI speech acts using a three-level annotation scale (replication/amplification/normalisation) was used as a methodological framework. Results. The results of the research supported the hypothesis of the statistical nature of LLM linguocreativity, revealing a three-stage model of deviation processing: recognition through attention mechanisms, classification by degree of deviation from the norm, and strategic choice of response. The paradox of ‘creative conformism’ is established, manifested in the AI's tendency to hypernormalise initially marginal linguistic innovations. Of particular practical interest are the documented effects of the circulation of AI-generated neologisms in social media and the formation of ‘artificial linguistic taste’. Research implications. The theoretical significance of the work lies in the development of the apparatus of cognitive linguistics of digital discourse and the clarification of the ontology of intentional deviations. Practical value is related to applications in the field of NLP systems development, digital linguodidactics and the prediction of language changes. 
The data obtained open prospects for further studying culturally specific deviations in multilingual models and developing metrics for assessing the linguocreative potential of AI.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>генеративные языковые модели</kwd><kwd>гипернормализация</kwd><kwd>интенциональные девиации</kwd><kwd>лингвокреативность</kwd><kwd>обработка естественного языка</kwd><kwd>цифровой дискурс</kwd></kwd-group><kwd-group xml:lang="en"><kwd>generative language models</kwd><kwd>hypernormalisation</kwd><kwd>intentional deviations</kwd><kwd>linguistic creativity</kwd><kwd>natural language processing</kwd><kwd>digital discourse</kwd></kwd-group></article-meta></front><back><ref-list><title>References</title><ref id="cit1"><label>1</label><citation-alternatives><mixed-citation xml:lang="ru">Dürscheid C. Grammatische und lexikalische Strukturen digital geschriebener Sprache // Handbuch Sprache und digitale Kommunikation / eds. J. Androutsopoulos, F. Vogel. Berlin – Boston: De Gruyter, 2024. P. 157–175. DOI: 10.1515/9783110744163-008.</mixed-citation><mixed-citation xml:lang="en">Dürscheid, C. (2024). Grammatische und lexikalische Strukturen digital geschriebener Sprache. In: Androutsopoulos, J. &amp; Vogel, F., eds. Handbuch Sprache und digitale Kommunikation. Berlin – Boston: De Gruyter, pp. 157–175. DOI: 10.1515/9783110744163-008.</mixed-citation></citation-alternatives></ref><ref id="cit2"><label>2</label><citation-alternatives><mixed-citation xml:lang="ru">The language of social media: Identity and community on the Internet / eds. P. Seargeant, C. Tagg. New York: Palgrave Macmillan, 2014. 272 p.</mixed-citation><mixed-citation xml:lang="en">Seargeant, P. &amp; Tagg, C., eds. (2014). The language of social media: Identity and community on the Internet. New York: Palgrave Macmillan.</mixed-citation></citation-alternatives></ref><ref id="cit3"><label>3</label><citation-alternatives><mixed-citation xml:lang="ru">Zappavigna M., Logi L. 
Emoji and social media paralanguage. Cambridge: Cambridge University Press, 2024. 296 p.</mixed-citation><mixed-citation xml:lang="en">Zappavigna, M. &amp; Logi, L. (2024). Emoji and social media paralanguage. Cambridge: Cambridge University Press.</mixed-citation></citation-alternatives></ref><ref id="cit4"><label>4</label><citation-alternatives><mixed-citation xml:lang="ru">Handbuch Sprache und digitale Kommunikation / eds. J. Androutsopoulos, F. Vogel. Berlin – Boston: De Gruyter, 2024. 588 p.</mixed-citation><mixed-citation xml:lang="en">Androutsopoulos, J. &amp; Vogel, F., eds. (2024). Handbuch Sprache und digitale Kommunikation. Berlin – Boston: De Gruyter.</mixed-citation></citation-alternatives></ref><ref id="cit5"><label>5</label><citation-alternatives><mixed-citation xml:lang="ru">Tagliamonte S. A. Teen talk: The language of adolescents. Cambridge: Cambridge University Press, 2016. 298 p.</mixed-citation><mixed-citation xml:lang="en">Tagliamonte, S. A. (2016). Teen talk: The language of adolescents. Cambridge: Cambridge University Press.</mixed-citation></citation-alternatives></ref><ref id="cit6"><label>6</label><citation-alternatives><mixed-citation xml:lang="ru">McCulloch G. Because internet: Understanding the new rules of language. New York: Riverhead Books, 2019. 336 p.</mixed-citation><mixed-citation xml:lang="en">McCulloch, G. (2019). Because internet: Understanding the new rules of language. New York: Riverhead Books.</mixed-citation></citation-alternatives></ref><ref id="cit7"><label>7</label><citation-alternatives><mixed-citation xml:lang="ru">Thurlow C. Digital Discourse: Locating Language in New/Social Media // The SAGE Handbook of Social Media / eds. J. Burgess, T. Poell, A. Marwick. New York: Sage, 2018. P. 135–145. DOI: 10.4135/9781473984066.n8.</mixed-citation><mixed-citation xml:lang="en">Thurlow, C. (2018). Digital Discourse: Locating Language in New/Social Media. In: Burgess, J., Poell, T. &amp; Marwick, A., eds. 
The SAGE Handbook of Social Media. New York: Sage, pp. 135–145. DOI: 10.4135/9781473984066.n8.</mixed-citation></citation-alternatives></ref><ref id="cit8"><label>8</label><citation-alternatives><mixed-citation xml:lang="ru">On the dangers of stochastic parrots: Can language models be too big? / E. M. Bender, T. Gebru, A. McMillan-Major, S. Shmitchell // FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. New York: Association for Computing Machinery, 2021. P. 610–623. DOI: 10.1145/3442188.3445922.</mixed-citation><mixed-citation xml:lang="en">Bender, E. M., Gebru, T., McMillan-Major, A. &amp; Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In: FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. New York: Association for Computing Machinery, pp. 610–623. DOI: 10.1145/3442188.3445922.</mixed-citation></citation-alternatives></ref><ref id="cit9"><label>9</label><citation-alternatives><mixed-citation xml:lang="ru">On the opportunities and risks of foundation models / R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein et al. [Электронный ресурс] // ArXiv : [сайт]. URL: https://arxiv.org/abs/2108.07258 (дата обращения: 07.03.2025). DOI: 10.48550/arXiv.2108.07258.</mixed-citation><mixed-citation xml:lang="en">Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M. S. et al. (2021). On the opportunities and risks of foundation models. In: ArXiv. URL: https://arxiv.org/abs/2108.07258 (accessed: 07.03.2025). DOI: 10.48550/arXiv.2108.07258.</mixed-citation></citation-alternatives></ref><ref id="cit10"><label>10</label><citation-alternatives><mixed-citation xml:lang="ru">BERT: Pre-training of deep bidirectional transformers for language understanding / J. Devlin, M.-W. Chang, K. Lee, K. 
Toutanova // Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 1 (Long and Short Papers). Minneapolis, Minnesota: Association for Computational Linguistics, 2019. P. 4171–4186. DOI: 10.18653/v1/N19-1423.</mixed-citation><mixed-citation xml:lang="en">Devlin, J., Chang, M.-W., Lee, K. &amp; Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 1 (Long and Short Papers). Minneapolis, Minnesota: Association for Computational Linguistics, pp. 4171–4186. DOI: 10.18653/v1/N19-1423.</mixed-citation></citation-alternatives></ref><ref id="cit11"><label>11</label><citation-alternatives><mixed-citation xml:lang="ru">Bender E. M., Koller A. Climbing towards NLU: On meaning, form, and understanding in the age of data // Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Minneapolis, Minnesota: Association for Computational Linguistics, 2020. P. 5185–5198. DOI: 10.18653/v1/2020.acl-main.463.</mixed-citation><mixed-citation xml:lang="en">Bender, E. M. &amp; Koller, A. (2020). Climbing towards NLU: On meaning, form, and understanding in the age of data. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Minneapolis, Minnesota: Association for Computational Linguistics, pp. 5185–5198. DOI: 10.18653/v1/2020.acl-main.463.</mixed-citation></citation-alternatives></ref><ref id="cit12"><label>12</label><citation-alternatives><mixed-citation xml:lang="ru">Marcus G. Deep Learning: A Critical Appraisal [Электронный ресурс] // ArXiv : [сайт]. URL: https://arxiv.org/abs/1801.00631 (дата обращения: 07.03.2025). DOI: 10.48550/arXiv.1801.00631.</mixed-citation><mixed-citation xml:lang="en">Marcus, G. (2018). 
Deep Learning: A Critical Appraisal. In: ArXiv. URL: https://arxiv.org/abs/1801.00631 (accessed: 07.03.2025). DOI: 10.48550/arXiv.1801.00631.</mixed-citation></citation-alternatives></ref><ref id="cit13"><label>13</label><citation-alternatives><mixed-citation xml:lang="ru">Revealing the dark secrets of BERT / O. Kovaleva, A. Romanov, A. Rogers, A. Rumshisky // Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Hong Kong, China: Association for Computational Linguistics, 2019. P. 4365–4374.</mixed-citation><mixed-citation xml:lang="en">Kovaleva, O., Romanov, A., Rogers, A. &amp; Rumshisky, A. (2019). Revealing the dark secrets of BERT. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Hong Kong, China: Association for Computational Linguistics, pp. 4365–4374.</mixed-citation></citation-alternatives></ref><ref id="cit14"><label>14</label><citation-alternatives><mixed-citation xml:lang="ru">Acharjee S., Aich U., Ali A. Does Language Model Understand Language? [Электронный ресурс] // ArXiv : [сайт]. URL: https://arxiv.org/abs/2509.12459 (дата обращения: 03.03.2025).</mixed-citation><mixed-citation xml:lang="en">Acharjee, S., Aich, U. &amp; Ali, A. (2025). Does Language Model Understand Language? In: ArXiv. URL: https://arxiv.org/abs/2509.12459 (accessed: 03.03.2025).</mixed-citation></citation-alternatives></ref><ref id="cit15"><label>15</label><citation-alternatives><mixed-citation xml:lang="ru">Van Hout T. Book review: Jannis Androutsopoulos (ed.). Mediatization and Sociolinguistic Change (Linguae &amp; Litterae 36) // Journal of Sociolinguistics. 2015. Vol. 19. Iss. 5. P. 714–718. DOI: 10.1111/josl.12163.</mixed-citation><mixed-citation xml:lang="en">Van Hout, T. (2015). Book review: Jannis Androutsopoulos (ed.). 
Mediatization and Sociolinguistic Change (Linguae &amp; Litterae 36). In: Journal of Sociolinguistics, 19 (5), 714–718. DOI: 10.1111/josl.12163.</mixed-citation></citation-alternatives></ref></ref-list><fn-group><fn fn-type="conflict"><p>The author declares that there are no conflicts of interest.</p></fn></fn-group></back></article>
