During the EFRAG SR TEG meetings in mid-September, I asked several people questions about the possible use of AI in the preparation of reports. They included representatives of companies (report authors), auditors (who will verify them), and banks and investment firms (who will use the information contained in them and make decisions based on it).
None of them identified a single task or process related to report creation in which conscious human action could be replaced by an LLM. Nor did anyone identify a task or process in which AI could bring cost savings. However, the views of the individual groups (report preparers, auditors, users) diverged in interesting ways.
Certified public accountants were the most indifferent to the use of AI in report generation. For them, it really makes no difference whether the text of a report was written by a human, by ChatGPT, or came to the author's mind during a walk in the park. The auditor's job is to verify a number of aspects of the report, such as whether the disclosures were selected correctly, whether the report complies with the standards, and whether the data it contains is accurate. Answers such as "we fed the AI model with a range of data and it gave us the impacts, risks, and opportunities that are relevant to us" are obviously insufficient for the auditor. The company must be able to show, step by step, how a given process led to the determination of which impacts, risks, and opportunities are relevant, and large language models do not and cannot enable this.
Representatives of banks and investment firms are not concerned about the impact of artificial intelligence on the quality of information contained in reports, for the simple reason that reports are subject to external assurance. They therefore believe that the data and information they receive has already been properly verified. At the same time, they express concern about the intellectual competence of company management boards that would rely on AI-based tools in matters as fundamental as shaping the business model and strategy. If I use AI because sustainability issues are complex, I am admitting that I do not understand my company's impact on its environment, nor the risks and opportunities that this environment generates for me. What is more, I am depriving myself of the opportunity to understand them, which does not bode well for the future.
Representatives of the companies preparing the reports drew attention to three issues. The first was the cost-benefit balance: creating preliminary drafts of report content with AI increased the labor costs of the people who had to verify that content beyond what it would have cost for a human simply to write it from scratch. The second issue was data and information security. Experts from large companies generally rule out transferring data to any tool they cannot be certain will keep that data within the company's control. This, in turn, means ruling out the use of open models, because there is no way to control what happens to the information entered into them. Finally, the third issue is that organizations would lose their understanding of sustainability issues if the work of reporting on them were done not by their employees but by an external tool.
This last issue raised by company representatives concerns not only the possible use of artificial intelligence, but also the use of external consulting services. In what cases is outsourcing justified, and how much of the work on reporting or sustainable development management can be outsourced without incapacitating the company and depriving it of opportunities for growth? This is a very important question, but one to be discussed in a future newsletter.



