Welcome to InsurLab Germany: Could you please briefly introduce Calvin Risk?
Calvin Risk is not your typical AI service provider. We don't develop chatbots or paint visions of the future in pastel colors. We ask the uncomfortable questions: What happens if your AI makes the wrong decision? Who is responsible if it doesn't work as planned? Our job is to rigorously stress-test artificial intelligence, make risks visible and put governance structures in place before an innovation becomes a headline. We work where technology, regulation and reputational protection meet - and sometimes collide.

Julian Riebartsch, CEO & Co-Founder
How did you come to join InsurLab and why did you decide to become a member?
We are convinced that the insurance industry is at a turning point. AI can revolutionize business models - or ruin them.
For us, InsurLab is the place to come together with people who don't just follow trends, but are willing to engage in critical debate and develop real solutions. We joined because we believe that progress only works if you think about innovation and security together.
What topics and content are important to you? How do you feel about our 2025 focus topics "Operational excellence and business process management" and "AI scaling and application"?
Operational excellence is not a "nice to have" - it is the safety net that prevents complex AI-supported processes from descending into chaos.
Business process management needs to be rethought when AI is part of decision-making - with clear test and control points. AI scaling excites us - but only if organizations are ready to identify, measure and manage risk at the same pace. We provide the control, testing and verification layer that makes AI in your business processes scalable, auditable and cost-effective.
Which of our activities are you planning to get involved in? What impetus would you like to bring to the InsurLab network - and what do you hope to get in return?
We want to provide impetus that is uncomfortable but necessary:
- How do you test an AI that acts autonomously?
- What happens when the best use case becomes the biggest risk?
- How do you create trust in black-box systems?
We contribute experience, testing methods and governance know-how and, in return, expect sparring partners who are prepared to engage in courageous discussions and jointly establish new standards for secure AI management.
Julian Riebartsch and Anastasia Movcharenko, thank you very much for the interview! We look forward to working with you.
