Bureaucracy can act as a drag on artificial intelligence
Mr. Bauer, while the USA has produced the mega-corporations of the digital age, Germany, as a glance at the DAX shows, is still stuck in the industrial age. Is it now also gambling away its AI future?
No, not yet. Germany has no reason to hide, and the situation shouldn't be talked down either. Of course, everybody is currently talking about the providers of the major AI language models, such as OpenAI with ChatGPT, Google with Gemini or Meta with Meta AI. But we have the brains, especially when it comes to the scientific foundations. So there is certainly potential for new breakthroughs in large models to come from Germany, as TabPFN shows. Decisive progress is also being made in specialised applications of AI. This is where German companies could bring their experience to bear in the future.
So where are the hotspots in Germany that could one day grow into digital billion-dollar corporations?
In addition to Berlin, Munich is very much in the running, as is Frankfurt with its Rhine-Main connections to Darmstadt and Mainz. However, we must now ensure that the conditions are in place for products to develop: top infrastructure with powerful, easily accessible data centres; lean regulation with little bureaucracy and simplified data-protection rules, above all not complicated further by national borders; and, of course, rapid financing so that startups can grow in the market.
Why is growth financing not really working out in this country?
A key obstacle to the growth financing of AI startups in Germany is their limited international visibility. Many excellent projects are created here, but they often fail to reach the global stage where investors' attention is won.
What needs to be done?
Increased public funding for applied AI research, training and, in particular, the transfer into practice would be an important signal. Targeted investment, including the financing of startups in the early stages, not only shows commitment, but can also mobilise private investment. Especially in economically challenging times, such a commitment from the public sector is a strong signal, and can help to strengthen confidence in Germany as a location for innovation.

How does bureaucracy hinder research and development?
Bureaucracy acts as a drag on research and development – especially when it comes to AI. A key problem here is not data protection per se, but how it is often interpreted over-cautiously in Germany. It gives the impression that everyone involved in the approval process primarily wants to protect themselves – for fear of being prosecuted later. This culture of risk avoidance prevents many data-based innovation projects right from the start.
So where should we start politically?
What is needed is a change in mentality: data protection yes, but with a sense of proportion and a focus on innovation. Above all, employee representatives in large companies should be involved at an early stage. This will not only make it easier to address risks, but also to communicate the benefits of data-based research more clearly for everyone involved. After all, many of the most relevant scientific findings in AI are the result of close cooperation with companies – for example on the basis of shared data or empirical field studies. For such partnerships to work, we need fewer bureaucratic hurdles and greater involvement of all stakeholders based on trust.
What role does the EU's AI Act play here? A bureaucratic monster? Or a starting point for business start-ups?
The basic idea of the AI Act makes perfect sense: focusing on safety and transparency in the use of AI is right and important, especially when it comes to sensitive or high-risk applications. However, there is a risk that the AI Act in its current form will lead to uncertainty.
Where is this happening, specifically?
This is particularly the case for companies and startups that are wondering how the rules are to be interpreted in detail and what to expect. If regulation is not applied with a sense of proportion, it can quickly become a brake on innovation. For startups in the AI sector in particular, it is crucial that the AI Act be understood as a clear and practical basis. That now requires good implementation and, above all, guidance that creates legal certainty without stifling entrepreneurial spirit.
Many companies are expecting a leap in productivity thanks to AI. Do you agree?
Yes, a leap in productivity through AI is already visible, at least in certain areas. We are currently seeing significant efficiency gains, especially in recurring tasks and specific programming work. Some scientific studies report productivity increases of 20 to 30%, even on conservative estimates. In the long term, not only will efficiency increase; the way we work will also change fundamentally. Processes will have to be rethought and tasks restructured. AI should not be seen as a replacement but as a partner. It can take on tasks independently or support us with valuable input, for example for better decisions or higher creative productivity.
So a kind of life companion?
The aim should be human-centered or responsible AI, i.e. artificial intelligence that serves people, makes life easier, and is used responsibly without creating excessive risks.
Where do you see the benefits or risks of AI applications revolutionising the business world?
AI has the potential to make decision-making and business processes significantly more efficient. At the same time, however, there is a risk that we will hand over too much responsibility to the systems. If we increasingly leave decision-making to AI, we will gradually lose cognitive abilities because we will no longer think through processes ourselves. I also see a central problem in the fact that we often structure our decisions so that AI systems can process them as easily as possible. In this way, we adapt to the technology instead of integrating it into our thought and work processes. We must always critically reflect on the results that AI delivers and not simply adopt them. Humans remain responsible for what is ultimately decided.
Give us an example.
If a recommendation system pre-sorts applicants, this must not mean that we no longer look at the proposals ourselves. We need to understand why certain decisions were made, otherwise there is a risk of a gradual loss of competence.
Computer scientists see the development of large AI language models at a dead end at the moment because they have to access more and more AI-generated material. Where do you see the dangers here?
There is indeed currently a danger that models become overly reliant on synthetic data generated by AI itself. This leads to a certain self-referentiality and could ultimately cause model performance to collapse. But here, too, it is worth pointing to the core of AI: machine learning. Machine-learning methods can show their full potential in concrete applications beyond the large language models, applications built on real data, for example from medicine or from companies' mechanical engineering. And this is what Germany needs to focus on.
Where do you see specific use cases for the financial industry?
There is indeed great potential for meaningful use here. One particularly exciting use case is the democratisation of financial knowledge. AI could greatly simplify access to complex information. Individual AI agents for bank customers are conceivable. These would analyse financial information, tailor it to individual needs, and make suggestions for sensible investments. Other ideas range from AI-supported investment advice and automated risk analyses, to regulatory applications for evaluating large volumes of data for compliance purposes.