- PII
- S2686673025040071-1
- DOI
- 10.31857/S2686673025040071
- Publication type
- Article
- Status
- Published
- Authors
- Volume / Edition
- Issue No. 4
- Pages
- 93-106
- Abstract
- The purpose of the article is to examine how well-known scientists assess the potential long-term risks of artificial intelligence (AI). It reviews their contributions to the development of generative AI technologies and the advocacy for safe AI by the renowned Canadian-British scientist and 2024 Nobel Prize laureate Geoffrey Hinton, as well as by his students and colleagues in the field of machine learning who either share his views or, on the contrary, oppose them (I. Sutskever, Y. Bengio, Y. LeCun). The article describes the warnings of scientists, programmers, and entrepreneurs (G. Hinton, Y. Bengio, I. Sutskever, S. Balaji, E. Musk, and others) about the uncontrollable consequences of the competitive race between high-tech companies such as OpenAI, Google, Anthropic, and Microsoft, as well as their advocacy for new regulatory approaches in this area. The article shows how researchers are forced into conflict with the interests of technology corporations in order to convey information about the risks of AI to a wide audience.
- Keywords
- artificial intelligence; generative artificial intelligence; machine learning; ChatGPT; G. Hinton; Y. Bengio; I. Sutskever; S. Balaji; existential risks
- Date of publication
- 08.11.2025
- Year of publication
- 2025
References
- 1. Geoffrey E. Hinton. Available at: https://www.cs.toronto.edu/~hinton/ (accessed 23.12.2024).
- 2. The Vector Institute for Artificial Intelligence. Available at: https://vectorinstitute.ai (accessed 28.12.2024).
- 3. Gartner: Generative AI Is Approaching the "Trough of Disillusionment". itWeek. 22 August. Available at: https://www.itweek.ru/ai/article/detail.php?ID=230226 (accessed 7.01.2025).
- 5. Metz, C. Silicon Valley’s Safe Space. The New York Times. 2021. 13 February. Available at: https://www.nytimes.com/2021/02/13/technology/slate-star-codex-rationalists.html (accessed 10.01.2025).
- 6. Metz, C. "The Godfather of A.I." Leaves Google and Warns of Danger Ahead. The New York Times. 2023. 4 May. Available at: https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html?ysclid=m1br4868ad234874607 (accessed 3.01.2025).
- 7. Geoffrey Hinton. Interview at the Vector Institute in Toronto: On the Technology That Is Already Changing the World. 2024. 5 November. Available at: https://rutube.ru/video/7d04ece462e7d0f17fbe5768d3639f9a/?ysclid=m5o8vmu01258862923 (accessed 4.01.2025).
- 8. SB 1047 (Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), 2023-2024. Available at: https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047 (accessed 3.01.2025).
- 9. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. White House. 2023. 30 October. Available at: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/ (accessed 3.01.2025).
- 10. Mims, Ch. This AI Pioneer Thinks AI Is Dumber Than a Cat. The Wall Street Journal. 2024. 11 October. Available at: https://www.wsj.com/tech/ai/yann-lecun-ai-meta-aa59e2f5 (accessed 3.01.2025).
- 11. Pashentsev, E. Malicious Use of AI and Information-Psychological Security: Future Risks. Russian International Affairs Council (RIAC). 2024. 9 July. Available at: https://russiancouncil.ru/analytics-and-comments/analytics/zlonamerennoe-ispolzovanie-ii-i-informatsionno-psikhologicheskaya-bezopasnost-budushchie-riski/?sphrase_id=166794964 (accessed 12.13.2025).
- 12. Banias, M.J. Inside CounterCloud: A Fully Autonomous AI Disinformation System. The Debrief. 2023, 16 August. Available at: https://thedebrief.org/countercloud-ai-disinformation/ (accessed 12.01.2025).
- 13. Alipova, E. Ilya Sutskever Predicts the Unpredictability of Superintelligent AI. RB.ru. 2024. 14 December. Available at: https://rb.ru/news/suckevernepredskaz/?ysclid=m4oatg9ir3926785364 (accessed 3.01.2025).
- 14. Metz, C. Former OpenAI Researcher Says the Company Broke Copyright Law. The New York Times. 2024. 23 October. Available at: https://www.nytimes.com/2024/10/23/technology/openai-copyright-law.html (accessed 3.01.2025).
- 15. Pause Giant AI Experiments: An Open Letter. Future of Life Institute. Available at: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ (accessed 6.01.2025).
- 16. OpenAI Fires Back at Elon Musk’s Lawsuit. The New York Times. 2024. 13 December. Available at: https://www.nytimes.com/2024/12/13/technology/openai-elon-musk-lawsuit.html (accessed 6.01.2025).
- 17. Heath, A. Meta Asks the Government to Block OpenAI’s Switch to a For-Profit. The Verge. 2024. 14 December. Available at: https://www.theverge.com/2024/12/13/24320880/meta-california-ag-letter-openai-non-profit-elon-musk (accessed 6.01.2025).
- 18. Working together on the future of AI. Association for the Advancement of Artificial Intelligence. 2023. 5 April. Available at: https://aaai.org/working-together-on-the-future-of-ai/ (accessed 6.01.2025).
- 19. LeCun, Y. 2021. How the Machine Learns: The Revolution in Neural Networks and Deep Learning. Moscow: Alpina PRO. 335 p. Available at: https://www.litres.ru/book/yan-lekun/kak-uchitsya-mashina-revoluciya-v-oblasti-neyronnyh-setey-i-glub-66361966/chitat-onlayn/?ysclid=m5twltns6c958014472 (accessed 6.01.2025).
- 20. Stashis, V. 2020. Mythologization of Reality in the Modern Information Space. Vesnik Brestskaga universiteta (Bulletin of Brest University). Series 1: Philosophy, Political Science, Sociology. No. 2, pp. 67-72.
- 21. Sokolova, M.E. 2022. From Self-Driving Vehicles to Autonomous Drones: The Range of Social and Ethical Risks. Russia and America in the 21st Century. No. 5. Available at: https://rusus.jes.su/s207054760022754-6-1/ (accessed 6.01.2025). DOI: https://doi.org/10.18254/S207054760022754-6.