More data can be found at aistatistics.ai, Hostinger, Planable, Fortune Business Insights and FF.co, among others. Among the most recognisable and versatile AI models are: GPT-4o from OpenAI, Claude (Anthropic), Gemini (Google DeepMind), the Chinese models DeepSeek and Qwen, Grok (xAI, Elon Musk) and Llama (Meta AI). There is also Bielik, a free, openly available language model.
The growing number of AI users is bringing the topic increasingly into the public debate. According to Stanford University's 2024 report, 55% of global respondents rated AI products and services as more beneficial than harmful – up from 52% in 2022. At the same time, there is also a growing number of people expressing concerns about ethics, privacy, cyber security, potential system bias, disinformation or interference with individual freedom.
We have all heard the question: will AI replace my profession in 2, 5, 10 or 15 years' time? It would be unreasonable to assume that no profession will be affected.
So how do we avoid becoming mere consumers of AI “fast-food”?
Let's start with an observation from the 2024/2025 academic year. Students are clearly not exploiting the potential of creative question preparation (so-called prompting). A question asked by a lecturer is often simply pasted into a chatbot without any context, which yields the simplest and least valuable answer. Meanwhile, it would be enough to tell the AI model to behave like an expert in the field who has read specific sources (e.g. book 1 and 2), and whose task is to explain a complex issue to another expert. This approach changes the quality of the answers obtained. It is also useful to test different versions of prompts with friends, thereby learning to think, experimenting, and building AI collaboration skills across the group.
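The difference between a bare, pasted-in question and a context-rich prompt can be sketched in a few lines of code. The helper below is purely illustrative (no real chatbot API is called; the function name and parameters are our own invention), but it shows how role, sources and audience are layered onto the same question:

```python
# Illustrative sketch: assembling a context-rich prompt from a bare question.
# build_prompt() is a hypothetical helper, not part of any real API.

def build_prompt(question, role=None, sources=None, audience=None):
    """Assemble a richer prompt from the bare question plus optional context."""
    parts = []
    if role:
        parts.append(f"Act as {role}.")
    if sources:
        parts.append("Base your answer on: " + ", ".join(sources) + ".")
    if audience:
        parts.append(f"Explain the issue to {audience}.")
    parts.append(question)
    return " ".join(parts)

# The low-value variant: just the lecturer's question, no context.
bare = build_prompt("Explain how gradient descent converges.")

# The richer variant described above: expert role, named sources, expert audience.
rich = build_prompt(
    "Explain how gradient descent converges.",
    role="an expert in numerical optimisation",
    sources=["book 1", "book 2"],
    audience="another expert",
)
print(rich)
```

Pasting the `rich` string into any chatbot in place of the bare question is exactly the shift in prompting quality described above.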
For more complex tasks, so-called prompt chaining can be used – a technique in which the different stages of an AI conversation are linked and dependent on each other. This allows you to create longer, more contextual interactions, divide complex problems into stages and better control the responses.
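The chaining idea can be sketched as follows. The `call_model()` function here is only a stand-in that echoes its input (a real setup would call an actual LLM API); what matters is that each stage's output is fed into the next stage's prompt:

```python
# Illustrative sketch of prompt chaining: each stage's output becomes
# context for the next stage. call_model() is a mock, not a real LLM call.

def call_model(prompt):
    """Placeholder for an LLM call; here it just echoes a tagged response."""
    return f"[model output for: {prompt}]"

def chain(question):
    # Stage 1: break the complex problem into sub-problems.
    plan = call_model(f"List the sub-problems needed to answer: {question}")
    # Stage 2: answer the question, passing the previous stage's plan as context.
    draft = call_model(f"Using this plan: {plan}\nAnswer: {question}")
    # Stage 3: have the model review and refine its own draft.
    final = call_model(f"Review and improve this answer: {draft}")
    return final

result = chain("How does a transformer handle long documents?")
print(result)
```

Dividing the work into plan, draft and review stages is one common chaining pattern; the stages themselves can be adapted to the task at hand.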
It is also possible to use personalisation features such as “Custom Instructions, Code Interpreter, Image input, Bio tool” available in ChatGPT, which allow the user to specify their own preferences in terms of style, tone or topic coverage. Such possibilities make interaction with AI more tailored to individual needs and goals. In this context, there is increasing talk of the phenomenon of “AI-powered people”, i.e. individuals and organisations that can effectively use AI to increase efficiency, make more accurate decisions and improve the quality of their work.
With AI, we can:
· automate routine tasks, gaining more time for creative activities,
· use data analytics for faster and more accurate decision-making,
· develop our competences and increase our productivity,
· focus on aspects of our work that require emotional intelligence, innovation and the ability to solve complex problems.
Consumer or creator?
Significantly, at the current stage of technology development you do not even need to know how to program to embark on the adventure of experimenting with AI. Thanks to lightweight, open-source language models, you can download and run a local LLM on your own computer and then “teach” it the content of your chosen book in order to ask questions and analyse the content. Popular models include Llama 3 (Meta) in 8B and 70B versions, Phi-3 (Microsoft), Gemma (Google) and TinyLlama. Their operation is made possible by easy-to-use tools such as Ollama (for Windows, Mac and Linux), LM Studio (with a friendly graphical interface) or GPT4All. With LM Studio and GPT4All, the user can load a text file (e.g. in .txt or .pdf format) and then have a conversation with the model based on the information it contains. Such hands-on experiments, which include preparing answers to the lecturer's questions based on a specific book, provide a better understanding of how AI works, what data-driven learning is all about and how crucial the quality of the input content is.
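For readers who do want a glimpse of the code behind such an experiment, the sketch below builds a question prompt grounded in a local text file. The file name, sample content and helper function are our own illustrative choices; the final (commented-out) step shows how the prompt could be sent to a locally running model via the `ollama` Python package, assuming Ollama is installed and a model such as llama3 has been pulled:

```python
# Illustrative sketch: ground a question in a local document before asking
# a locally run LLM. File name and helper are hypothetical examples.
from pathlib import Path

# Create a small example "book" so the sketch is self-contained.
Path("book.txt").write_text(
    "Sample chapter: the author argues that context matters more than volume.",
    encoding="utf-8",
)

def build_book_prompt(path, question, max_chars=4000):
    """Read the document and prepend (a slice of) it to the question."""
    text = Path(path).read_text(encoding="utf-8")[:max_chars]
    return (
        "Answer using only the excerpt below.\n\n"
        f"EXCERPT:\n{text}\n\n"
        f"QUESTION: {question}"
    )

prompt = build_book_prompt("book.txt", "What is the author's main thesis?")
print(prompt)

# With Ollama installed and a model pulled (e.g. `ollama pull llama3`),
# the prompt could be sent locally like this:
# import ollama
# reply = ollama.chat(model="llama3",
#                     messages=[{"role": "user", "content": prompt}])
# print(reply["message"]["content"])
```

LM Studio and GPT4All hide these steps behind a graphical interface, but the underlying idea is the same: the quality of the answer depends directly on the quality of the text fed in.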
Often, what an AI model lacks in order to provide the answer we want is the broader context of the question: context we have access to, but the AI does not. Moreover, when a topic is poorly documented, concerns non-existent facts or lacks clear references, the language model starts to confabulate (a phenomenon often called hallucination). The answers may then sound convincing, but are in fact incorrect or even false. In such cases, the language model effectively “guesses” the answer based on similar phrases or situations encountered during training, instead of basing it on facts. A critical look at such an answer, however, requires the user to have at least a basic knowledge of the area in question. Therefore, it is still a good idea to consult old-school academic textbooks: not to memorise every sentence, but to understand the wider context and gain a general understanding of the topic. Only then is it worth condensing the knowledge into key points useful for the exam with the help of AI.
As Professor Adam Wojciechowski aptly pointed out in his article, AI is still a knife in the hands of a child: the university should research and popularise the mechanisms of artificial intelligence, while we should use them effectively and develop competence in this field. The key question is: do we want to be mere consumers of this technology, or also its creators?
In a world where AI tools are becoming ever more powerful, the gap between those who merely consume content and those who create it will only widen.